00:00:00.001 Started by upstream project "autotest-nightly" build number 3879 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3259 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.072 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.096 Fetching changes from the remote Git repository 00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.122 Using shallow fetch with depth 1 00:00:00.122 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.122 > git --version # timeout=10 00:00:00.156 > git --version # 'git version 2.39.2' 00:00:00.156 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.195 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.195 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.766 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.777 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.789 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:04.789 > git config core.sparsecheckout # timeout=10 00:00:04.801 > git read-tree -mu HEAD # timeout=10 00:00:04.817 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:04.835 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:04.836 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:04.973 [Pipeline] Start of Pipeline 00:00:04.988 [Pipeline] library 00:00:04.989 Loading library shm_lib@master 00:00:04.989 Library shm_lib@master is cached. Copying from home. 00:00:05.005 [Pipeline] node 00:00:05.024 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.026 [Pipeline] { 00:00:05.037 [Pipeline] catchError 00:00:05.039 [Pipeline] { 00:00:05.053 [Pipeline] wrap 00:00:05.063 [Pipeline] { 00:00:05.071 [Pipeline] stage 00:00:05.073 [Pipeline] { (Prologue) 00:00:05.112 [Pipeline] echo 00:00:05.121 Node: VM-host-SM9 00:00:05.140 [Pipeline] cleanWs 00:00:05.156 [WS-CLEANUP] Deleting project workspace... 00:00:05.156 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.161 [WS-CLEANUP] done 00:00:05.323 [Pipeline] setCustomBuildProperty 00:00:05.396 [Pipeline] httpRequest 00:00:05.414 [Pipeline] echo 00:00:05.415 Sorcerer 10.211.164.101 is alive 00:00:05.421 [Pipeline] httpRequest 00:00:05.424 HttpMethod: GET 00:00:05.424 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:05.425 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:05.425 Response Code: HTTP/1.1 200 OK 00:00:05.426 Success: Status code 200 is in the accepted range: 200,404 00:00:05.426 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.232 [Pipeline] sh 00:00:06.511 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.524 [Pipeline] httpRequest 00:00:06.550 [Pipeline] echo 00:00:06.552 Sorcerer 10.211.164.101 is alive 00:00:06.561 [Pipeline] httpRequest 00:00:06.565 HttpMethod: GET 00:00:06.565 URL: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:06.565 Sending request to url: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:06.577 Response Code: HTTP/1.1 200 OK 00:00:06.577 Success: Status code 200 is in the accepted range: 200,404 00:00:06.578 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:25.402 [Pipeline] sh 00:00:25.684 + tar --no-same-owner -xf spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz 00:00:28.230 [Pipeline] sh 00:00:28.507 + git -C spdk log --oneline -n5 00:00:28.508 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:28.508 6c7c1f57e accel: add sequence outstanding stat 00:00:28.508 3bc8e6a26 accel: add utility to put task 00:00:28.508 2dba73997 accel: move get task utility 00:00:28.508 e45c8090e accel: improve accel sequence obj release 00:00:28.526 [Pipeline] writeFile 00:00:28.541 [Pipeline] sh 00:00:28.819 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:28.830 [Pipeline] sh 00:00:29.107 + cat autorun-spdk.conf 00:00:29.107 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.107 SPDK_TEST_NVMF=1 00:00:29.107 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.107 SPDK_TEST_URING=1 00:00:29.107 SPDK_TEST_VFIOUSER=1 00:00:29.107 SPDK_TEST_USDT=1 00:00:29.107 SPDK_RUN_ASAN=1 00:00:29.107 SPDK_RUN_UBSAN=1 00:00:29.107 NET_TYPE=virt 00:00:29.107 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.113 RUN_NIGHTLY=1 00:00:29.115 [Pipeline] } 00:00:29.131 [Pipeline] // stage 00:00:29.145 [Pipeline] stage 00:00:29.147 [Pipeline] { (Run VM) 00:00:29.160 [Pipeline] sh 00:00:29.438 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:29.438 + echo 'Start stage prepare_nvme.sh' 00:00:29.438 Start stage prepare_nvme.sh 00:00:29.438 + [[ -n 2 ]] 00:00:29.438 + disk_prefix=ex2 00:00:29.438 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:29.438 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:29.438 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:29.438 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.438 ++ SPDK_TEST_NVMF=1 00:00:29.438 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.438 ++ SPDK_TEST_URING=1 00:00:29.438 ++ SPDK_TEST_VFIOUSER=1 00:00:29.438 ++ SPDK_TEST_USDT=1 00:00:29.438 ++ SPDK_RUN_ASAN=1 00:00:29.438 ++ SPDK_RUN_UBSAN=1 00:00:29.438 ++ NET_TYPE=virt 00:00:29.438 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.438 ++ RUN_NIGHTLY=1 00:00:29.438 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:29.438 + nvme_files=() 00:00:29.438 + declare -A nvme_files 00:00:29.438 + backend_dir=/var/lib/libvirt/images/backends 00:00:29.438 + nvme_files['nvme.img']=5G 00:00:29.438 + nvme_files['nvme-cmb.img']=5G 00:00:29.438 + nvme_files['nvme-multi0.img']=4G 00:00:29.438 + nvme_files['nvme-multi1.img']=4G 00:00:29.438 + nvme_files['nvme-multi2.img']=4G 00:00:29.438 + nvme_files['nvme-openstack.img']=8G 00:00:29.438 + nvme_files['nvme-zns.img']=5G 00:00:29.438 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:29.438 + (( SPDK_TEST_FTL == 1 )) 00:00:29.438 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:29.438 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:29.438 + for nvme in "${!nvme_files[@]}" 00:00:29.438 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:29.438 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.438 + for nvme in "${!nvme_files[@]}" 00:00:29.438 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:29.438 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.438 + for nvme in "${!nvme_files[@]}" 00:00:29.438 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:29.696 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:29.696 + for nvme in "${!nvme_files[@]}" 00:00:29.696 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:29.696 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.696 + for nvme in "${!nvme_files[@]}" 00:00:29.696 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:29.696 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.696 + for nvme in "${!nvme_files[@]}" 00:00:29.696 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:29.696 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.696 + for nvme in "${!nvme_files[@]}" 00:00:29.696 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:29.954 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.954 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:29.954 + echo 'End stage prepare_nvme.sh' 00:00:29.954 End stage prepare_nvme.sh 00:00:29.965 [Pipeline] sh 00:00:30.243 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.243 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:00:30.501 
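Note (not part of the captured log): the xtrace in the prepare_nvme.sh stage above corresponds roughly to the loop sketched below. This is a hand-condensed reconstruction from the trace, not the script itself; the backend directory, the ex2 disk prefix, and the image sizes are all copied from the traced commands.

# Sketch of the traced image-creation loop (reconstructed from the xtrace above).
backend_dir=/var/lib/libvirt/images/backends
disk_prefix=ex2
declare -A nvme_files
nvme_files['nvme.img']=5G
nvme_files['nvme-cmb.img']=5G
nvme_files['nvme-multi0.img']=4G
nvme_files['nvme-multi1.img']=4G
nvme_files['nvme-multi2.img']=4G
nvme_files['nvme-openstack.img']=8G
nvme_files['nvme-zns.img']=5G
for nvme in "${!nvme_files[@]}"; do
    # create_nvme_img.sh is the SPDK helper invoked in the trace; -n is the image path, -s its size
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
done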
00:00:30.501 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:30.501 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:30.501 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:30.501 HELP=0 00:00:30.501 DRY_RUN=0 00:00:30.501 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:30.501 NVME_DISKS_TYPE=nvme,nvme, 00:00:30.501 NVME_AUTO_CREATE=0 00:00:30.501 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:30.501 NVME_CMB=,, 00:00:30.501 NVME_PMR=,, 00:00:30.501 NVME_ZNS=,, 00:00:30.501 NVME_MS=,, 00:00:30.501 NVME_FDP=,, 00:00:30.501 SPDK_VAGRANT_DISTRO=fedora38 00:00:30.501 SPDK_VAGRANT_VMCPU=10 00:00:30.501 SPDK_VAGRANT_VMRAM=12288 00:00:30.501 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.501 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.501 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.501 SPDK_OPENSTACK_NETWORK=0 00:00:30.501 VAGRANT_PACKAGE_BOX=0 00:00:30.501 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:30.501 FORCE_DISTRO=true 00:00:30.501 VAGRANT_BOX_VERSION= 00:00:30.501 EXTRA_VAGRANTFILES= 00:00:30.501 NIC_MODEL=e1000 00:00:30.501 00:00:30.501 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:30.501 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:33.784 Bringing machine 'default' up with 'libvirt' provider... 00:00:34.043 ==> default: Creating image (snapshot of base box volume). 00:00:34.043 ==> default: Creating domain with the following settings... 
00:00:34.043 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720676809_1463580341b861ee16e6
00:00:34.043 ==> default: -- Domain type: kvm
00:00:34.043 ==> default: -- Cpus: 10
00:00:34.043 ==> default: -- Feature: acpi
00:00:34.043 ==> default: -- Feature: apic
00:00:34.043 ==> default: -- Feature: pae
00:00:34.043 ==> default: -- Memory: 12288M
00:00:34.043 ==> default: -- Memory Backing: hugepages:
00:00:34.043 ==> default: -- Management MAC:
00:00:34.043 ==> default: -- Loader:
00:00:34.043 ==> default: -- Nvram:
00:00:34.043 ==> default: -- Base box: spdk/fedora38
00:00:34.043 ==> default: -- Storage pool: default
00:00:34.043 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720676809_1463580341b861ee16e6.img (20G)
00:00:34.043 ==> default: -- Volume Cache: default
00:00:34.043 ==> default: -- Kernel:
00:00:34.043 ==> default: -- Initrd:
00:00:34.043 ==> default: -- Graphics Type: vnc
00:00:34.043 ==> default: -- Graphics Port: -1
00:00:34.043 ==> default: -- Graphics IP: 127.0.0.1
00:00:34.043 ==> default: -- Graphics Password: Not defined
00:00:34.043 ==> default: -- Video Type: cirrus
00:00:34.043 ==> default: -- Video VRAM: 9216
00:00:34.043 ==> default: -- Sound Type:
00:00:34.043 ==> default: -- Keymap: en-us
00:00:34.043 ==> default: -- TPM Path:
00:00:34.043 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:34.043 ==> default: -- Command line args:
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:34.043 ==> default: -> value=-drive,
00:00:34.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:34.043 ==> default: -> value=-drive,
00:00:34.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.043 ==> default: -> value=-drive,
00:00:34.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.043 ==> default: -> value=-drive,
00:00:34.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:34.043 ==> default: -> value=-device,
00:00:34.043 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.043 ==> default: Creating shared folders metadata...
00:00:34.043 ==> default: Starting domain.
00:00:35.422 ==> default: Waiting for domain to get an IP address...
00:00:53.560 ==> default: Waiting for SSH to become available...
00:00:53.560 ==> default: Configuring and enabling network interfaces...
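Note (not part of the captured log): the "Command line args" entries above are extra QEMU arguments passed through verbatim by the libvirt provider. Assembled by hand, the NVMe portion of the guest's QEMU invocation looks roughly like the sketch below; the emulator path is the one given in the Setup line earlier, and the machine/CPU/memory/base-disk arguments that libvirt supplies on its own are omitted.

# Hand-assembled sketch of the NVMe device arguments only; not a complete, bootable invocation.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096
# Result inside the guest: nvme0 with a single namespace backed by ex2-nvme.img, and nvme1 with
# three namespaces backed by the ex2-nvme-multi*.img files -- matching the nvme0n1 and
# nvme1n1/n2/n3 block devices reported by setup.sh status later in this log.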
00:00:56.095 default: SSH address: 192.168.121.9:22 00:00:56.095 default: SSH username: vagrant 00:00:56.095 default: SSH auth method: private key 00:00:58.657 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:06.775 ==> default: Mounting SSHFS shared folder... 00:01:07.713 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.713 ==> default: Checking Mount.. 00:01:09.091 ==> default: Folder Successfully Mounted! 00:01:09.091 ==> default: Running provisioner: file... 00:01:09.658 default: ~/.gitconfig => .gitconfig 00:01:10.226 00:01:10.226 SUCCESS! 00:01:10.226 00:01:10.226 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:10.226 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:10.226 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:10.226 00:01:10.235 [Pipeline] } 00:01:10.253 [Pipeline] // stage 00:01:10.263 [Pipeline] dir 00:01:10.264 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:10.266 [Pipeline] { 00:01:10.280 [Pipeline] catchError 00:01:10.282 [Pipeline] { 00:01:10.296 [Pipeline] sh 00:01:10.576 + vagrant ssh-config --host vagrant 00:01:10.576 + sed -ne /^Host/,$p 00:01:10.576 + tee ssh_conf 00:01:13.865 Host vagrant 00:01:13.865 HostName 192.168.121.9 00:01:13.865 User vagrant 00:01:13.865 Port 22 00:01:13.865 UserKnownHostsFile /dev/null 00:01:13.865 StrictHostKeyChecking no 00:01:13.865 PasswordAuthentication no 00:01:13.865 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:13.865 IdentitiesOnly yes 00:01:13.865 LogLevel FATAL 00:01:13.865 ForwardAgent yes 00:01:13.865 ForwardX11 yes 00:01:13.865 00:01:13.879 [Pipeline] withEnv 00:01:13.881 [Pipeline] { 00:01:13.898 [Pipeline] sh 00:01:14.178 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:14.178 source /etc/os-release 00:01:14.178 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.178 # Minimal, systemd-like check. 00:01:14.178 if [[ -e /.dockerenv ]]; then 00:01:14.178 # Clear garbage from the node's name: 00:01:14.178 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.178 # $HOSTNAME is the actual container id 00:01:14.178 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.178 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.178 # We can assume this is a mount from a host where container is running, 00:01:14.178 # so fetch its hostname to easily identify the target swarm worker. 
00:01:14.178 container="$(< /etc/hostname) ($agent)" 00:01:14.178 else 00:01:14.178 # Fallback 00:01:14.178 container=$agent 00:01:14.178 fi 00:01:14.178 fi 00:01:14.178 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.178 00:01:14.189 [Pipeline] } 00:01:14.210 [Pipeline] // withEnv 00:01:14.219 [Pipeline] setCustomBuildProperty 00:01:14.231 [Pipeline] stage 00:01:14.233 [Pipeline] { (Tests) 00:01:14.249 [Pipeline] sh 00:01:14.529 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.803 [Pipeline] sh 00:01:15.084 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.359 [Pipeline] timeout 00:01:15.359 Timeout set to expire in 30 min 00:01:15.361 [Pipeline] { 00:01:15.380 [Pipeline] sh 00:01:15.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:16.232 HEAD is now at 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:16.249 [Pipeline] sh 00:01:16.528 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.801 [Pipeline] sh 00:01:17.082 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:17.359 [Pipeline] sh 00:01:17.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:17.897 ++ readlink -f spdk_repo 00:01:17.897 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:17.897 + [[ -n /home/vagrant/spdk_repo ]] 00:01:17.897 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:17.897 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:17.897 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:17.897 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:17.897 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:17.897 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:17.897 + cd /home/vagrant/spdk_repo 00:01:17.897 + source /etc/os-release 00:01:17.897 ++ NAME='Fedora Linux' 00:01:17.897 ++ VERSION='38 (Cloud Edition)' 00:01:17.897 ++ ID=fedora 00:01:17.897 ++ VERSION_ID=38 00:01:17.897 ++ VERSION_CODENAME= 00:01:17.897 ++ PLATFORM_ID=platform:f38 00:01:17.897 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:17.897 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.897 ++ LOGO=fedora-logo-icon 00:01:17.897 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:17.897 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.897 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:17.897 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.897 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.897 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.897 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:17.897 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.897 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:17.897 ++ SUPPORT_END=2024-05-14 00:01:17.897 ++ VARIANT='Cloud Edition' 00:01:17.897 ++ VARIANT_ID=cloud 00:01:17.897 + uname -a 00:01:17.897 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:17.897 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:18.154 Hugepages 00:01:18.154 node hugesize free / total 00:01:18.412 node0 1048576kB 0 / 0 00:01:18.412 node0 2048kB 0 / 0 00:01:18.412 00:01:18.412 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.412 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.412 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.412 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:18.412 + rm -f /tmp/spdk-ld-path 00:01:18.412 + source autorun-spdk.conf 00:01:18.412 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.412 ++ SPDK_TEST_NVMF=1 00:01:18.412 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.412 ++ SPDK_TEST_URING=1 00:01:18.412 ++ SPDK_TEST_VFIOUSER=1 00:01:18.412 ++ SPDK_TEST_USDT=1 00:01:18.412 ++ SPDK_RUN_ASAN=1 00:01:18.412 ++ SPDK_RUN_UBSAN=1 00:01:18.412 ++ NET_TYPE=virt 00:01:18.412 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.412 ++ RUN_NIGHTLY=1 00:01:18.412 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.412 + [[ -n '' ]] 00:01:18.412 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.412 + for M in /var/spdk/build-*-manifest.txt 00:01:18.412 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.412 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.412 + for M in /var/spdk/build-*-manifest.txt 00:01:18.412 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.412 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.412 ++ uname 00:01:18.412 + [[ Linux == \L\i\n\u\x ]] 00:01:18.412 + sudo dmesg -T 00:01:18.412 + sudo dmesg --clear 00:01:18.412 + dmesg_pid=5162 00:01:18.412 + [[ Fedora Linux == FreeBSD ]] 00:01:18.412 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.412 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.412 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.412 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.412 + 
sudo dmesg -Tw 00:01:18.412 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.412 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.412 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.412 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.412 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.412 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.412 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.412 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.412 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.412 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.412 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.412 Test configuration: 00:01:18.412 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.412 SPDK_TEST_NVMF=1 00:01:18.412 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.412 SPDK_TEST_URING=1 00:01:18.412 SPDK_TEST_VFIOUSER=1 00:01:18.412 SPDK_TEST_USDT=1 00:01:18.412 SPDK_RUN_ASAN=1 00:01:18.412 SPDK_RUN_UBSAN=1 00:01:18.412 NET_TYPE=virt 00:01:18.412 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.671 RUN_NIGHTLY=1 05:47:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:18.671 05:47:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.671 05:47:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.671 05:47:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.671 05:47:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.671 05:47:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.671 05:47:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.671 05:47:34 -- paths/export.sh@5 -- $ export PATH 00:01:18.671 05:47:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.671 05:47:34 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:18.671 05:47:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:18.671 05:47:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720676854.XXXXXX 00:01:18.671 05:47:34 -- 
common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720676854.srAKry 00:01:18.671 05:47:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:18.671 05:47:34 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:18.671 05:47:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:18.671 05:47:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:18.671 05:47:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.671 05:47:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:18.671 05:47:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:18.671 05:47:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.671 05:47:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:18.671 05:47:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:18.671 05:47:34 -- pm/common@17 -- $ local monitor 00:01:18.671 05:47:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.671 05:47:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.671 05:47:34 -- pm/common@25 -- $ sleep 1 00:01:18.671 05:47:34 -- pm/common@21 -- $ date +%s 00:01:18.671 05:47:34 -- pm/common@21 -- $ date +%s 00:01:18.671 05:47:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720676854 00:01:18.671 05:47:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720676854 00:01:18.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720676854_collect-vmstat.pm.log 00:01:18.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720676854_collect-cpu-load.pm.log 00:01:19.608 05:47:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:19.608 05:47:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.608 05:47:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.608 05:47:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:19.608 05:47:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.608 Thu Jul 11 05:47:35 AM UTC 2024 00:01:19.608 05:47:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.608 v24.09-pre-200-g9937c0160 00:01:19.608 05:47:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:19.608 05:47:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:19.608 05:47:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:19.608 05:47:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.608 05:47:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.608 ************************************ 00:01:19.608 START TEST asan 00:01:19.608 ************************************ 00:01:19.608 using asan 00:01:19.608 05:47:35 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:19.608 00:01:19.608 real 0m0.000s 
00:01:19.608 user 0m0.000s 00:01:19.608 sys 0m0.000s 00:01:19.608 05:47:35 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:19.608 ************************************ 00:01:19.608 END TEST asan 00:01:19.608 05:47:35 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.608 ************************************ 00:01:19.608 05:47:35 -- common/autotest_common.sh@1142 -- $ return 0 00:01:19.608 05:47:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.608 05:47:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.608 05:47:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:19.608 05:47:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:19.608 05:47:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.608 ************************************ 00:01:19.608 START TEST ubsan 00:01:19.608 ************************************ 00:01:19.608 using ubsan 00:01:19.608 05:47:35 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:19.608 00:01:19.608 real 0m0.000s 00:01:19.608 user 0m0.000s 00:01:19.608 sys 0m0.000s 00:01:19.608 05:47:35 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:19.608 ************************************ 00:01:19.608 05:47:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.608 END TEST ubsan 00:01:19.608 ************************************ 00:01:19.608 05:47:35 -- common/autotest_common.sh@1142 -- $ return 0 00:01:19.608 05:47:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.608 05:47:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.608 05:47:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:19.608 05:47:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:19.866 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:19.866 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:20.432 Using 'verbs' RDMA provider 00:01:36.271 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:48.470 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:48.470 Creating mk/config.mk...done. 00:01:48.470 Creating mk/cc.flags.mk...done. 00:01:48.470 Type 'make' to build. 00:01:48.470 05:48:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:48.470 05:48:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:48.470 05:48:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:48.470 05:48:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.470 ************************************ 00:01:48.470 START TEST make 00:01:48.470 ************************************ 00:01:48.470 05:48:03 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:48.470 make[1]: Nothing to be done for 'all'. 
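Note (not part of the captured log): stripped of the autobuild wrapper, the configure-and-build step recorded above reduces to the two commands sketched below. The flag list is copied verbatim from the config_params line (plus the --with-shared appended by autobuild), and the paths assume the /home/vagrant/spdk_repo layout used on this VM.

# Condensed sketch of the traced SPDK build; flags taken from the log above, not a canonical recipe.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-vfio-user --with-uring --with-shared
make -j10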
00:01:49.036 The Meson build system 00:01:49.036 Version: 1.3.1 00:01:49.036 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:49.036 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:49.036 Build type: native build 00:01:49.036 Project name: libvfio-user 00:01:49.036 Project version: 0.0.1 00:01:49.036 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.036 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.036 Host machine cpu family: x86_64 00:01:49.036 Host machine cpu: x86_64 00:01:49.036 Run-time dependency threads found: YES 00:01:49.036 Library dl found: YES 00:01:49.036 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.036 Run-time dependency json-c found: YES 0.17 00:01:49.036 Run-time dependency cmocka found: YES 1.1.7 00:01:49.036 Program pytest-3 found: NO 00:01:49.036 Program flake8 found: NO 00:01:49.036 Program misspell-fixer found: NO 00:01:49.036 Program restructuredtext-lint found: NO 00:01:49.036 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.036 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.036 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.036 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.036 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.036 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.036 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.036 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:49.036 Build targets in project: 8 00:01:49.036 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.037 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.037 00:01:49.037 libvfio-user 0.0.1 00:01:49.037 00:01:49.037 User defined options 00:01:49.037 buildtype : debug 00:01:49.037 default_library: shared 00:01:49.037 libdir : /usr/local/lib 00:01:49.037 00:01:49.037 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.604 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:49.604 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:49.604 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:49.604 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:49.604 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:49.604 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:49.604 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:49.604 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:49.604 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:49.604 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:49.604 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:49.864 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:49.864 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:49.864 [13/37] Compiling C object samples/client.p/client.c.o 00:01:49.864 [14/37] Compiling C object samples/null.p/null.c.o 00:01:49.864 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:49.864 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:49.864 [17/37] Linking target samples/client 00:01:49.864 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:49.864 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:49.864 [20/37] Compiling C object samples/server.p/server.c.o 00:01:49.864 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:49.864 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:49.864 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:49.864 [24/37] Linking target lib/libvfio-user.so.0.0.1 00:01:49.864 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:49.864 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:49.864 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:49.864 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:50.123 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:50.123 [30/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:50.123 [31/37] Linking target test/unit_tests 00:01:50.123 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:50.123 [33/37] Linking target samples/server 00:01:50.123 [34/37] Linking target samples/gpio-pci-idio-16 00:01:50.123 [35/37] Linking target samples/lspci 00:01:50.123 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:50.123 [37/37] Linking target samples/null 00:01:50.123 INFO: autodetecting backend as ninja 00:01:50.123 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:50.123 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:50.690 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:50.690 ninja: no work to do. 00:01:58.805 The Meson build system 00:01:58.805 Version: 1.3.1 00:01:58.805 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:58.805 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:58.805 Build type: native build 00:01:58.805 Program cat found: YES (/usr/bin/cat) 00:01:58.805 Project name: DPDK 00:01:58.805 Project version: 24.03.0 00:01:58.805 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:58.805 C linker for the host machine: cc ld.bfd 2.39-16 00:01:58.805 Host machine cpu family: x86_64 00:01:58.805 Host machine cpu: x86_64 00:01:58.805 Message: ## Building in Developer Mode ## 00:01:58.805 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.805 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.805 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.805 Program python3 found: YES (/usr/bin/python3) 00:01:58.805 Program cat found: YES (/usr/bin/cat) 00:01:58.805 Compiler for C supports arguments -march=native: YES 00:01:58.805 Checking for size of "void *" : 8 00:01:58.805 Checking for size of "void *" : 8 (cached) 00:01:58.805 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:58.805 Library m found: YES 00:01:58.805 Library numa found: YES 00:01:58.805 Has header "numaif.h" : YES 00:01:58.805 Library fdt found: NO 00:01:58.805 Library execinfo found: NO 00:01:58.805 Has header "execinfo.h" : YES 00:01:58.805 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.805 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.805 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.805 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.805 Run-time dependency openssl found: YES 3.0.9 00:01:58.805 Run-time dependency libpcap found: YES 1.10.4 00:01:58.805 Has header "pcap.h" with dependency libpcap: YES 00:01:58.805 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.805 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.805 Compiler for C supports arguments -Wformat: YES 00:01:58.805 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.805 Compiler for C supports arguments -Wformat-security: NO 00:01:58.805 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.805 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.805 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.805 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.805 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.805 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.805 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.805 Compiler for C supports arguments -Wundef: YES 00:01:58.805 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.805 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.805 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.805 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.805 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:01:58.805 Program objdump found: YES (/usr/bin/objdump) 00:01:58.805 Compiler for C supports arguments -mavx512f: YES 00:01:58.805 Checking if "AVX512 checking" compiles: YES 00:01:58.805 Fetching value of define "__SSE4_2__" : 1 00:01:58.805 Fetching value of define "__AES__" : 1 00:01:58.805 Fetching value of define "__AVX__" : 1 00:01:58.805 Fetching value of define "__AVX2__" : 1 00:01:58.805 Fetching value of define "__AVX512BW__" : (undefined) 00:01:58.805 Fetching value of define "__AVX512CD__" : (undefined) 00:01:58.805 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:58.805 Fetching value of define "__AVX512F__" : (undefined) 00:01:58.805 Fetching value of define "__AVX512VL__" : (undefined) 00:01:58.805 Fetching value of define "__PCLMUL__" : 1 00:01:58.805 Fetching value of define "__RDRND__" : 1 00:01:58.805 Fetching value of define "__RDSEED__" : 1 00:01:58.805 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.805 Fetching value of define "__znver1__" : (undefined) 00:01:58.805 Fetching value of define "__znver2__" : (undefined) 00:01:58.805 Fetching value of define "__znver3__" : (undefined) 00:01:58.805 Fetching value of define "__znver4__" : (undefined) 00:01:58.805 Library asan found: YES 00:01:58.805 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.805 Message: lib/log: Defining dependency "log" 00:01:58.805 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.805 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.805 Library rt found: YES 00:01:58.805 Checking for function "getentropy" : NO 00:01:58.805 Message: lib/eal: Defining dependency "eal" 00:01:58.805 Message: lib/ring: Defining dependency "ring" 00:01:58.805 Message: lib/rcu: Defining dependency "rcu" 00:01:58.805 Message: lib/mempool: Defining dependency "mempool" 00:01:58.805 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.805 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.805 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.805 Compiler for C supports arguments -mpclmul: YES 00:01:58.805 Compiler for C supports arguments -maes: YES 00:01:58.805 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.805 Compiler for C supports arguments -mavx512bw: YES 00:01:58.805 Compiler for C supports arguments -mavx512dq: YES 00:01:58.805 Compiler for C supports arguments -mavx512vl: YES 00:01:58.805 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.805 Compiler for C supports arguments -mavx2: YES 00:01:58.805 Compiler for C supports arguments -mavx: YES 00:01:58.805 Message: lib/net: Defining dependency "net" 00:01:58.805 Message: lib/meter: Defining dependency "meter" 00:01:58.805 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.805 Message: lib/pci: Defining dependency "pci" 00:01:58.805 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.805 Message: lib/hash: Defining dependency "hash" 00:01:58.805 Message: lib/timer: Defining dependency "timer" 00:01:58.805 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.805 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.805 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.805 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.805 Message: lib/power: Defining dependency "power" 00:01:58.805 Message: lib/reorder: Defining dependency "reorder" 00:01:58.805 Message: lib/security: Defining dependency "security" 00:01:58.805 Has header "linux/userfaultfd.h" : YES 
00:01:58.805 Has header "linux/vduse.h" : YES 00:01:58.805 Message: lib/vhost: Defining dependency "vhost" 00:01:58.805 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.805 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.805 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.805 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.805 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.805 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.805 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.805 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.805 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.805 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.805 Program doxygen found: YES (/usr/bin/doxygen) 00:01:58.805 Configuring doxy-api-html.conf using configuration 00:01:58.805 Configuring doxy-api-man.conf using configuration 00:01:58.805 Program mandb found: YES (/usr/bin/mandb) 00:01:58.805 Program sphinx-build found: NO 00:01:58.805 Configuring rte_build_config.h using configuration 00:01:58.805 Message: 00:01:58.806 ================= 00:01:58.806 Applications Enabled 00:01:58.806 ================= 00:01:58.806 00:01:58.806 apps: 00:01:58.806 00:01:58.806 00:01:58.806 Message: 00:01:58.806 ================= 00:01:58.806 Libraries Enabled 00:01:58.806 ================= 00:01:58.806 00:01:58.806 libs: 00:01:58.806 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.806 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.806 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.806 00:01:58.806 Message: 00:01:58.806 =============== 00:01:58.806 Drivers Enabled 00:01:58.806 =============== 00:01:58.806 00:01:58.806 common: 00:01:58.806 00:01:58.806 bus: 00:01:58.806 pci, vdev, 00:01:58.806 mempool: 00:01:58.806 ring, 00:01:58.806 dma: 00:01:58.806 00:01:58.806 net: 00:01:58.806 00:01:58.806 crypto: 00:01:58.806 00:01:58.806 compress: 00:01:58.806 00:01:58.806 vdpa: 00:01:58.806 00:01:58.806 00:01:58.806 Message: 00:01:58.806 ================= 00:01:58.806 Content Skipped 00:01:58.806 ================= 00:01:58.806 00:01:58.806 apps: 00:01:58.806 dumpcap: explicitly disabled via build config 00:01:58.806 graph: explicitly disabled via build config 00:01:58.806 pdump: explicitly disabled via build config 00:01:58.806 proc-info: explicitly disabled via build config 00:01:58.806 test-acl: explicitly disabled via build config 00:01:58.806 test-bbdev: explicitly disabled via build config 00:01:58.806 test-cmdline: explicitly disabled via build config 00:01:58.806 test-compress-perf: explicitly disabled via build config 00:01:58.806 test-crypto-perf: explicitly disabled via build config 00:01:58.806 test-dma-perf: explicitly disabled via build config 00:01:58.806 test-eventdev: explicitly disabled via build config 00:01:58.806 test-fib: explicitly disabled via build config 00:01:58.806 test-flow-perf: explicitly disabled via build config 00:01:58.806 test-gpudev: explicitly disabled via build config 00:01:58.806 test-mldev: explicitly disabled via build config 00:01:58.806 test-pipeline: explicitly disabled via build config 00:01:58.806 test-pmd: explicitly disabled via build config 00:01:58.806 test-regex: explicitly disabled via build config 00:01:58.806 test-sad: explicitly disabled via build 
config 00:01:58.806 test-security-perf: explicitly disabled via build config 00:01:58.806 00:01:58.806 libs: 00:01:58.806 argparse: explicitly disabled via build config 00:01:58.806 metrics: explicitly disabled via build config 00:01:58.806 acl: explicitly disabled via build config 00:01:58.806 bbdev: explicitly disabled via build config 00:01:58.806 bitratestats: explicitly disabled via build config 00:01:58.806 bpf: explicitly disabled via build config 00:01:58.806 cfgfile: explicitly disabled via build config 00:01:58.806 distributor: explicitly disabled via build config 00:01:58.806 efd: explicitly disabled via build config 00:01:58.806 eventdev: explicitly disabled via build config 00:01:58.806 dispatcher: explicitly disabled via build config 00:01:58.806 gpudev: explicitly disabled via build config 00:01:58.806 gro: explicitly disabled via build config 00:01:58.806 gso: explicitly disabled via build config 00:01:58.806 ip_frag: explicitly disabled via build config 00:01:58.806 jobstats: explicitly disabled via build config 00:01:58.806 latencystats: explicitly disabled via build config 00:01:58.806 lpm: explicitly disabled via build config 00:01:58.806 member: explicitly disabled via build config 00:01:58.806 pcapng: explicitly disabled via build config 00:01:58.806 rawdev: explicitly disabled via build config 00:01:58.806 regexdev: explicitly disabled via build config 00:01:58.806 mldev: explicitly disabled via build config 00:01:58.806 rib: explicitly disabled via build config 00:01:58.806 sched: explicitly disabled via build config 00:01:58.806 stack: explicitly disabled via build config 00:01:58.806 ipsec: explicitly disabled via build config 00:01:58.806 pdcp: explicitly disabled via build config 00:01:58.806 fib: explicitly disabled via build config 00:01:58.806 port: explicitly disabled via build config 00:01:58.806 pdump: explicitly disabled via build config 00:01:58.806 table: explicitly disabled via build config 00:01:58.806 pipeline: explicitly disabled via build config 00:01:58.806 graph: explicitly disabled via build config 00:01:58.806 node: explicitly disabled via build config 00:01:58.806 00:01:58.806 drivers: 00:01:58.806 common/cpt: not in enabled drivers build config 00:01:58.806 common/dpaax: not in enabled drivers build config 00:01:58.806 common/iavf: not in enabled drivers build config 00:01:58.806 common/idpf: not in enabled drivers build config 00:01:58.806 common/ionic: not in enabled drivers build config 00:01:58.806 common/mvep: not in enabled drivers build config 00:01:58.806 common/octeontx: not in enabled drivers build config 00:01:58.806 bus/auxiliary: not in enabled drivers build config 00:01:58.806 bus/cdx: not in enabled drivers build config 00:01:58.806 bus/dpaa: not in enabled drivers build config 00:01:58.806 bus/fslmc: not in enabled drivers build config 00:01:58.806 bus/ifpga: not in enabled drivers build config 00:01:58.806 bus/platform: not in enabled drivers build config 00:01:58.806 bus/uacce: not in enabled drivers build config 00:01:58.806 bus/vmbus: not in enabled drivers build config 00:01:58.806 common/cnxk: not in enabled drivers build config 00:01:58.806 common/mlx5: not in enabled drivers build config 00:01:58.806 common/nfp: not in enabled drivers build config 00:01:58.806 common/nitrox: not in enabled drivers build config 00:01:58.806 common/qat: not in enabled drivers build config 00:01:58.806 common/sfc_efx: not in enabled drivers build config 00:01:58.806 mempool/bucket: not in enabled drivers build config 00:01:58.806 
mempool/cnxk: not in enabled drivers build config 00:01:58.806 mempool/dpaa: not in enabled drivers build config 00:01:58.806 mempool/dpaa2: not in enabled drivers build config 00:01:58.806 mempool/octeontx: not in enabled drivers build config 00:01:58.806 mempool/stack: not in enabled drivers build config 00:01:58.806 dma/cnxk: not in enabled drivers build config 00:01:58.806 dma/dpaa: not in enabled drivers build config 00:01:58.806 dma/dpaa2: not in enabled drivers build config 00:01:58.806 dma/hisilicon: not in enabled drivers build config 00:01:58.806 dma/idxd: not in enabled drivers build config 00:01:58.806 dma/ioat: not in enabled drivers build config 00:01:58.806 dma/skeleton: not in enabled drivers build config 00:01:58.806 net/af_packet: not in enabled drivers build config 00:01:58.806 net/af_xdp: not in enabled drivers build config 00:01:58.806 net/ark: not in enabled drivers build config 00:01:58.806 net/atlantic: not in enabled drivers build config 00:01:58.806 net/avp: not in enabled drivers build config 00:01:58.806 net/axgbe: not in enabled drivers build config 00:01:58.806 net/bnx2x: not in enabled drivers build config 00:01:58.806 net/bnxt: not in enabled drivers build config 00:01:58.806 net/bonding: not in enabled drivers build config 00:01:58.806 net/cnxk: not in enabled drivers build config 00:01:58.806 net/cpfl: not in enabled drivers build config 00:01:58.806 net/cxgbe: not in enabled drivers build config 00:01:58.806 net/dpaa: not in enabled drivers build config 00:01:58.806 net/dpaa2: not in enabled drivers build config 00:01:58.806 net/e1000: not in enabled drivers build config 00:01:58.806 net/ena: not in enabled drivers build config 00:01:58.806 net/enetc: not in enabled drivers build config 00:01:58.806 net/enetfec: not in enabled drivers build config 00:01:58.806 net/enic: not in enabled drivers build config 00:01:58.806 net/failsafe: not in enabled drivers build config 00:01:58.806 net/fm10k: not in enabled drivers build config 00:01:58.806 net/gve: not in enabled drivers build config 00:01:58.806 net/hinic: not in enabled drivers build config 00:01:58.806 net/hns3: not in enabled drivers build config 00:01:58.806 net/i40e: not in enabled drivers build config 00:01:58.806 net/iavf: not in enabled drivers build config 00:01:58.806 net/ice: not in enabled drivers build config 00:01:58.806 net/idpf: not in enabled drivers build config 00:01:58.806 net/igc: not in enabled drivers build config 00:01:58.806 net/ionic: not in enabled drivers build config 00:01:58.806 net/ipn3ke: not in enabled drivers build config 00:01:58.806 net/ixgbe: not in enabled drivers build config 00:01:58.806 net/mana: not in enabled drivers build config 00:01:58.806 net/memif: not in enabled drivers build config 00:01:58.806 net/mlx4: not in enabled drivers build config 00:01:58.806 net/mlx5: not in enabled drivers build config 00:01:58.806 net/mvneta: not in enabled drivers build config 00:01:58.806 net/mvpp2: not in enabled drivers build config 00:01:58.806 net/netvsc: not in enabled drivers build config 00:01:58.806 net/nfb: not in enabled drivers build config 00:01:58.806 net/nfp: not in enabled drivers build config 00:01:58.806 net/ngbe: not in enabled drivers build config 00:01:58.806 net/null: not in enabled drivers build config 00:01:58.806 net/octeontx: not in enabled drivers build config 00:01:58.806 net/octeon_ep: not in enabled drivers build config 00:01:58.806 net/pcap: not in enabled drivers build config 00:01:58.806 net/pfe: not in enabled drivers build config 
00:01:58.806 net/qede: not in enabled drivers build config 00:01:58.806 net/ring: not in enabled drivers build config 00:01:58.806 net/sfc: not in enabled drivers build config 00:01:58.806 net/softnic: not in enabled drivers build config 00:01:58.806 net/tap: not in enabled drivers build config 00:01:58.806 net/thunderx: not in enabled drivers build config 00:01:58.806 net/txgbe: not in enabled drivers build config 00:01:58.806 net/vdev_netvsc: not in enabled drivers build config 00:01:58.806 net/vhost: not in enabled drivers build config 00:01:58.806 net/virtio: not in enabled drivers build config 00:01:58.806 net/vmxnet3: not in enabled drivers build config 00:01:58.806 raw/*: missing internal dependency, "rawdev" 00:01:58.806 crypto/armv8: not in enabled drivers build config 00:01:58.806 crypto/bcmfs: not in enabled drivers build config 00:01:58.806 crypto/caam_jr: not in enabled drivers build config 00:01:58.806 crypto/ccp: not in enabled drivers build config 00:01:58.806 crypto/cnxk: not in enabled drivers build config 00:01:58.806 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.806 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.806 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.806 crypto/mlx5: not in enabled drivers build config 00:01:58.806 crypto/mvsam: not in enabled drivers build config 00:01:58.806 crypto/nitrox: not in enabled drivers build config 00:01:58.806 crypto/null: not in enabled drivers build config 00:01:58.806 crypto/octeontx: not in enabled drivers build config 00:01:58.806 crypto/openssl: not in enabled drivers build config 00:01:58.806 crypto/scheduler: not in enabled drivers build config 00:01:58.806 crypto/uadk: not in enabled drivers build config 00:01:58.807 crypto/virtio: not in enabled drivers build config 00:01:58.807 compress/isal: not in enabled drivers build config 00:01:58.807 compress/mlx5: not in enabled drivers build config 00:01:58.807 compress/nitrox: not in enabled drivers build config 00:01:58.807 compress/octeontx: not in enabled drivers build config 00:01:58.807 compress/zlib: not in enabled drivers build config 00:01:58.807 regex/*: missing internal dependency, "regexdev" 00:01:58.807 ml/*: missing internal dependency, "mldev" 00:01:58.807 vdpa/ifc: not in enabled drivers build config 00:01:58.807 vdpa/mlx5: not in enabled drivers build config 00:01:58.807 vdpa/nfp: not in enabled drivers build config 00:01:58.807 vdpa/sfc: not in enabled drivers build config 00:01:58.807 event/*: missing internal dependency, "eventdev" 00:01:58.807 baseband/*: missing internal dependency, "bbdev" 00:01:58.807 gpu/*: missing internal dependency, "gpudev" 00:01:58.807 00:01:58.807 00:01:59.065 Build targets in project: 85 00:01:59.065 00:01:59.065 DPDK 24.03.0 00:01:59.065 00:01:59.065 User defined options 00:01:59.065 buildtype : debug 00:01:59.065 default_library : shared 00:01:59.065 libdir : lib 00:01:59.065 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.065 b_sanitize : address 00:01:59.065 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.065 c_link_args : 00:01:59.065 cpu_instruction_set: native 00:01:59.065 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.065 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.065 enable_docs : false 00:01:59.065 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.065 enable_kmods : false 00:01:59.065 max_lcores : 128 00:01:59.065 tests : false 00:01:59.065 00:01:59.065 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.644 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.644 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.644 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.644 [3/268] Linking static target lib/librte_kvargs.a 00:01:59.910 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.910 [5/268] Linking static target lib/librte_log.a 00:01:59.910 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.168 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.168 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.168 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.427 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.685 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.685 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.685 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.685 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.685 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.685 [16/268] Linking target lib/librte_log.so.24.1 00:02:00.685 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.685 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.685 [19/268] Linking static target lib/librte_telemetry.a 00:02:00.943 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.943 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.943 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.202 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.202 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.202 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.460 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.460 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.460 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.460 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.719 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.719 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.719 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.719 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.719 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.977 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.977 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:01.977 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.977 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.235 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.235 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.235 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.494 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.494 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.494 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.753 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.753 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.753 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.753 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.011 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.011 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.011 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.270 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.529 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.529 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.529 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.529 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.529 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.787 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.787 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.787 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.787 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.045 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.045 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.045 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.303 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.303 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.562 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.562 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.821 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.821 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.821 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.079 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.079 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:02:05.079 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.079 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.079 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.079 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.338 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.338 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.597 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.597 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.597 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.855 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.855 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.855 [85/268] Linking static target lib/librte_ring.a 00:02:06.114 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.114 [87/268] Linking static target lib/librte_eal.a 00:02:06.114 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.372 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.631 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.631 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.631 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.631 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.631 [94/268] Linking static target lib/librte_rcu.a 00:02:06.631 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.890 [96/268] Linking static target lib/librte_mempool.a 00:02:06.890 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.890 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.148 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.148 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.148 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.406 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.406 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.406 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.663 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.663 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.663 [107/268] Linking static target lib/librte_net.a 00:02:07.663 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.663 [109/268] Linking static target lib/librte_meter.a 00:02:07.920 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.920 [111/268] Linking static target lib/librte_mbuf.a 00:02:07.920 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.920 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.177 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.177 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.177 [116/268] 
Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.177 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.435 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.693 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.693 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.951 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.951 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.209 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.209 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.468 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.469 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.469 [127/268] Linking static target lib/librte_pci.a 00:02:09.469 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.469 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.728 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.728 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.728 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.728 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.728 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.728 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.728 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.987 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.987 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.987 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.988 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.988 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.988 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.988 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.988 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.556 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.556 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.556 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.556 [148/268] Linking static target lib/librte_cmdline.a 00:02:10.556 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.815 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.815 [151/268] Linking static target lib/librte_timer.a 00:02:10.815 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.074 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.074 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.333 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 
00:02:11.333 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.333 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.333 [158/268] Linking static target lib/librte_ethdev.a 00:02:11.592 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.592 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.592 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.592 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.592 [163/268] Linking static target lib/librte_compressdev.a 00:02:11.592 [164/268] Linking static target lib/librte_hash.a 00:02:11.851 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.110 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.110 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.110 [168/268] Linking static target lib/librte_dmadev.a 00:02:12.110 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.370 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.370 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.370 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.370 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.629 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.888 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.888 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.888 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.888 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.888 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.146 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.146 [181/268] Linking static target lib/librte_cryptodev.a 00:02:13.146 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.146 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.146 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.405 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.664 [186/268] Linking static target lib/librte_power.a 00:02:13.664 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.664 [188/268] Linking static target lib/librte_reorder.a 00:02:13.664 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.664 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.923 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.923 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.923 [193/268] Linking static target lib/librte_security.a 00:02:14.181 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.181 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.439 
[196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.439 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.698 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.698 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.956 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.956 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.956 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.956 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.956 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.215 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.215 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.473 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.473 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.473 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.473 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.473 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.732 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.732 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.732 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.732 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.732 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.732 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.732 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.732 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:15.991 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.991 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.991 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.991 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.991 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.991 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.250 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.509 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.768 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.768 [229/268] Linking target lib/librte_eal.so.24.1 00:02:17.026 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.026 [231/268] Linking target lib/librte_ring.so.24.1 00:02:17.026 [232/268] Linking target lib/librte_pci.so.24.1 00:02:17.026 [233/268] Linking target 
drivers/librte_bus_vdev.so.24.1 00:02:17.026 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:17.026 [235/268] Linking target lib/librte_meter.so.24.1 00:02:17.026 [236/268] Linking target lib/librte_timer.so.24.1 00:02:17.026 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.285 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.285 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.285 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.285 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.285 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.285 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:17.285 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:17.285 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.285 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.544 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.544 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.544 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.544 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.801 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:17.801 [252/268] Linking target lib/librte_net.so.24.1 00:02:17.801 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.801 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.801 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.801 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.801 [257/268] Linking target lib/librte_security.so.24.1 00:02:17.801 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:17.801 [259/268] Linking target lib/librte_hash.so.24.1 00:02:18.060 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.627 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.627 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.886 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.886 [264/268] Linking target lib/librte_power.so.24.1 00:02:20.790 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.790 [266/268] Linking static target lib/librte_vhost.a 00:02:22.693 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.694 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.694 INFO: autodetecting backend as ninja 00:02:22.694 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:23.631 CC lib/log/log.o 00:02:23.631 CC lib/log/log_flags.o 00:02:23.631 CC lib/log/log_deprecated.o 00:02:23.631 CC lib/ut_mock/mock.o 00:02:23.631 CC lib/ut/ut.o 00:02:23.890 LIB libspdk_log.a 00:02:23.890 LIB libspdk_ut_mock.a 00:02:23.890 LIB libspdk_ut.a 00:02:23.890 SO libspdk_ut_mock.so.6.0 00:02:23.890 SO libspdk_ut.so.2.0 00:02:23.890 SO libspdk_log.so.7.0 00:02:23.890 SYMLINK libspdk_ut_mock.so 00:02:23.890 SYMLINK libspdk_ut.so 00:02:23.890 SYMLINK libspdk_log.so 00:02:24.148 
CC lib/dma/dma.o 00:02:24.148 CXX lib/trace_parser/trace.o 00:02:24.148 CC lib/util/bit_array.o 00:02:24.148 CC lib/util/base64.o 00:02:24.148 CC lib/util/cpuset.o 00:02:24.148 CC lib/util/crc16.o 00:02:24.148 CC lib/ioat/ioat.o 00:02:24.148 CC lib/util/crc32.o 00:02:24.148 CC lib/util/crc32c.o 00:02:24.406 CC lib/vfio_user/host/vfio_user_pci.o 00:02:24.406 CC lib/vfio_user/host/vfio_user.o 00:02:24.406 CC lib/util/crc32_ieee.o 00:02:24.406 CC lib/util/crc64.o 00:02:24.406 LIB libspdk_dma.a 00:02:24.406 CC lib/util/dif.o 00:02:24.406 SO libspdk_dma.so.4.0 00:02:24.406 CC lib/util/fd.o 00:02:24.406 CC lib/util/file.o 00:02:24.406 SYMLINK libspdk_dma.so 00:02:24.406 CC lib/util/hexlify.o 00:02:24.406 CC lib/util/iov.o 00:02:24.663 CC lib/util/math.o 00:02:24.663 LIB libspdk_ioat.a 00:02:24.663 CC lib/util/pipe.o 00:02:24.663 SO libspdk_ioat.so.7.0 00:02:24.663 CC lib/util/strerror_tls.o 00:02:24.663 LIB libspdk_vfio_user.a 00:02:24.663 CC lib/util/string.o 00:02:24.663 SYMLINK libspdk_ioat.so 00:02:24.663 CC lib/util/uuid.o 00:02:24.663 SO libspdk_vfio_user.so.5.0 00:02:24.663 CC lib/util/fd_group.o 00:02:24.663 CC lib/util/xor.o 00:02:24.663 SYMLINK libspdk_vfio_user.so 00:02:24.663 CC lib/util/zipf.o 00:02:25.228 LIB libspdk_util.a 00:02:25.228 SO libspdk_util.so.9.1 00:02:25.486 LIB libspdk_trace_parser.a 00:02:25.486 SYMLINK libspdk_util.so 00:02:25.486 SO libspdk_trace_parser.so.5.0 00:02:25.486 SYMLINK libspdk_trace_parser.so 00:02:25.486 CC lib/rdma_utils/rdma_utils.o 00:02:25.486 CC lib/vmd/vmd.o 00:02:25.486 CC lib/vmd/led.o 00:02:25.486 CC lib/rdma_provider/common.o 00:02:25.486 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:25.486 CC lib/conf/conf.o 00:02:25.486 CC lib/json/json_parse.o 00:02:25.486 CC lib/json/json_util.o 00:02:25.486 CC lib/idxd/idxd.o 00:02:25.486 CC lib/env_dpdk/env.o 00:02:25.744 CC lib/idxd/idxd_user.o 00:02:25.744 CC lib/idxd/idxd_kernel.o 00:02:25.744 LIB libspdk_rdma_provider.a 00:02:25.744 SO libspdk_rdma_provider.so.6.0 00:02:25.744 LIB libspdk_conf.a 00:02:25.744 CC lib/json/json_write.o 00:02:25.744 SO libspdk_conf.so.6.0 00:02:26.002 LIB libspdk_rdma_utils.a 00:02:26.002 CC lib/env_dpdk/memory.o 00:02:26.002 SYMLINK libspdk_rdma_provider.so 00:02:26.002 CC lib/env_dpdk/pci.o 00:02:26.002 SO libspdk_rdma_utils.so.1.0 00:02:26.002 SYMLINK libspdk_conf.so 00:02:26.002 CC lib/env_dpdk/init.o 00:02:26.002 CC lib/env_dpdk/threads.o 00:02:26.002 SYMLINK libspdk_rdma_utils.so 00:02:26.002 CC lib/env_dpdk/pci_ioat.o 00:02:26.002 CC lib/env_dpdk/pci_virtio.o 00:02:26.002 CC lib/env_dpdk/pci_vmd.o 00:02:26.002 CC lib/env_dpdk/pci_idxd.o 00:02:26.260 CC lib/env_dpdk/pci_event.o 00:02:26.260 LIB libspdk_json.a 00:02:26.260 SO libspdk_json.so.6.0 00:02:26.260 CC lib/env_dpdk/sigbus_handler.o 00:02:26.260 CC lib/env_dpdk/pci_dpdk.o 00:02:26.260 SYMLINK libspdk_json.so 00:02:26.260 LIB libspdk_idxd.a 00:02:26.260 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.260 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.260 SO libspdk_idxd.so.12.0 00:02:26.518 SYMLINK libspdk_idxd.so 00:02:26.518 LIB libspdk_vmd.a 00:02:26.518 CC lib/jsonrpc/jsonrpc_server.o 00:02:26.518 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:26.518 CC lib/jsonrpc/jsonrpc_client.o 00:02:26.518 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:26.518 SO libspdk_vmd.so.6.0 00:02:26.518 SYMLINK libspdk_vmd.so 00:02:26.777 LIB libspdk_jsonrpc.a 00:02:26.777 SO libspdk_jsonrpc.so.6.0 00:02:27.038 SYMLINK libspdk_jsonrpc.so 00:02:27.340 CC lib/rpc/rpc.o 00:02:27.340 LIB libspdk_env_dpdk.a 00:02:27.609 LIB libspdk_rpc.a 
00:02:27.609 SO libspdk_rpc.so.6.0 00:02:27.609 SO libspdk_env_dpdk.so.14.1 00:02:27.609 SYMLINK libspdk_rpc.so 00:02:27.609 SYMLINK libspdk_env_dpdk.so 00:02:27.877 CC lib/notify/notify.o 00:02:27.877 CC lib/notify/notify_rpc.o 00:02:27.877 CC lib/keyring/keyring_rpc.o 00:02:27.877 CC lib/keyring/keyring.o 00:02:27.877 CC lib/trace/trace.o 00:02:27.877 CC lib/trace/trace_flags.o 00:02:27.877 CC lib/trace/trace_rpc.o 00:02:28.136 LIB libspdk_notify.a 00:02:28.136 SO libspdk_notify.so.6.0 00:02:28.136 LIB libspdk_keyring.a 00:02:28.136 LIB libspdk_trace.a 00:02:28.136 SYMLINK libspdk_notify.so 00:02:28.136 SO libspdk_keyring.so.1.0 00:02:28.136 SO libspdk_trace.so.10.0 00:02:28.136 SYMLINK libspdk_trace.so 00:02:28.136 SYMLINK libspdk_keyring.so 00:02:28.394 CC lib/sock/sock.o 00:02:28.394 CC lib/sock/sock_rpc.o 00:02:28.394 CC lib/thread/thread.o 00:02:28.394 CC lib/thread/iobuf.o 00:02:28.960 LIB libspdk_sock.a 00:02:29.218 SO libspdk_sock.so.10.0 00:02:29.218 SYMLINK libspdk_sock.so 00:02:29.475 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.475 CC lib/nvme/nvme_ctrlr.o 00:02:29.475 CC lib/nvme/nvme_fabric.o 00:02:29.475 CC lib/nvme/nvme_ns_cmd.o 00:02:29.475 CC lib/nvme/nvme_pcie_common.o 00:02:29.475 CC lib/nvme/nvme_ns.o 00:02:29.475 CC lib/nvme/nvme_pcie.o 00:02:29.475 CC lib/nvme/nvme.o 00:02:29.475 CC lib/nvme/nvme_qpair.o 00:02:30.408 CC lib/nvme/nvme_quirks.o 00:02:30.408 CC lib/nvme/nvme_transport.o 00:02:30.408 LIB libspdk_thread.a 00:02:30.408 SO libspdk_thread.so.10.1 00:02:30.408 CC lib/nvme/nvme_discovery.o 00:02:30.408 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:30.408 SYMLINK libspdk_thread.so 00:02:30.408 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:30.665 CC lib/nvme/nvme_tcp.o 00:02:30.665 CC lib/nvme/nvme_opal.o 00:02:30.665 CC lib/nvme/nvme_io_msg.o 00:02:30.922 CC lib/nvme/nvme_poll_group.o 00:02:30.922 CC lib/nvme/nvme_zns.o 00:02:31.180 CC lib/nvme/nvme_stubs.o 00:02:31.180 CC lib/nvme/nvme_auth.o 00:02:31.180 CC lib/nvme/nvme_cuse.o 00:02:31.180 CC lib/nvme/nvme_vfio_user.o 00:02:31.180 CC lib/nvme/nvme_rdma.o 00:02:31.438 CC lib/accel/accel.o 00:02:31.696 CC lib/accel/accel_rpc.o 00:02:31.696 CC lib/blob/blobstore.o 00:02:31.696 CC lib/init/json_config.o 00:02:31.954 CC lib/init/subsystem.o 00:02:31.954 CC lib/init/subsystem_rpc.o 00:02:31.954 CC lib/init/rpc.o 00:02:31.954 CC lib/accel/accel_sw.o 00:02:32.213 CC lib/blob/request.o 00:02:32.213 LIB libspdk_init.a 00:02:32.213 CC lib/virtio/virtio.o 00:02:32.213 SO libspdk_init.so.5.0 00:02:32.213 CC lib/virtio/virtio_vhost_user.o 00:02:32.471 CC lib/vfu_tgt/tgt_endpoint.o 00:02:32.471 SYMLINK libspdk_init.so 00:02:32.471 CC lib/vfu_tgt/tgt_rpc.o 00:02:32.471 CC lib/virtio/virtio_vfio_user.o 00:02:32.471 CC lib/virtio/virtio_pci.o 00:02:32.472 CC lib/blob/zeroes.o 00:02:32.730 CC lib/blob/blob_bs_dev.o 00:02:32.730 LIB libspdk_accel.a 00:02:32.730 CC lib/event/reactor.o 00:02:32.730 CC lib/event/app.o 00:02:32.730 LIB libspdk_vfu_tgt.a 00:02:32.730 CC lib/event/log_rpc.o 00:02:32.730 SO libspdk_vfu_tgt.so.3.0 00:02:32.730 CC lib/event/app_rpc.o 00:02:32.730 SO libspdk_accel.so.15.1 00:02:32.730 SYMLINK libspdk_vfu_tgt.so 00:02:32.730 CC lib/event/scheduler_static.o 00:02:32.730 SYMLINK libspdk_accel.so 00:02:32.988 LIB libspdk_virtio.a 00:02:32.989 LIB libspdk_nvme.a 00:02:32.989 SO libspdk_virtio.so.7.0 00:02:32.989 SYMLINK libspdk_virtio.so 00:02:32.989 CC lib/bdev/bdev_rpc.o 00:02:32.989 CC lib/bdev/bdev.o 00:02:32.989 CC lib/bdev/bdev_zone.o 00:02:32.989 CC lib/bdev/part.o 00:02:32.989 CC lib/bdev/scsi_nvme.o 00:02:33.246 SO 
libspdk_nvme.so.13.1 00:02:33.246 LIB libspdk_event.a 00:02:33.246 SO libspdk_event.so.14.0 00:02:33.504 SYMLINK libspdk_event.so 00:02:33.504 SYMLINK libspdk_nvme.so 00:02:36.037 LIB libspdk_blob.a 00:02:36.037 SO libspdk_blob.so.11.0 00:02:36.037 SYMLINK libspdk_blob.so 00:02:36.295 CC lib/lvol/lvol.o 00:02:36.295 CC lib/blobfs/tree.o 00:02:36.295 CC lib/blobfs/blobfs.o 00:02:36.554 LIB libspdk_bdev.a 00:02:36.554 SO libspdk_bdev.so.15.1 00:02:36.812 SYMLINK libspdk_bdev.so 00:02:37.069 CC lib/ublk/ublk.o 00:02:37.070 CC lib/ftl/ftl_core.o 00:02:37.070 CC lib/ublk/ublk_rpc.o 00:02:37.070 CC lib/ftl/ftl_layout.o 00:02:37.070 CC lib/ftl/ftl_init.o 00:02:37.070 CC lib/nbd/nbd.o 00:02:37.070 CC lib/nvmf/ctrlr.o 00:02:37.070 CC lib/scsi/dev.o 00:02:37.327 CC lib/nbd/nbd_rpc.o 00:02:37.327 CC lib/ftl/ftl_debug.o 00:02:37.327 CC lib/scsi/lun.o 00:02:37.327 CC lib/ftl/ftl_io.o 00:02:37.327 CC lib/ftl/ftl_sb.o 00:02:37.327 CC lib/nvmf/ctrlr_discovery.o 00:02:37.585 LIB libspdk_blobfs.a 00:02:37.585 LIB libspdk_nbd.a 00:02:37.585 SO libspdk_blobfs.so.10.0 00:02:37.585 SO libspdk_nbd.so.7.0 00:02:37.585 LIB libspdk_lvol.a 00:02:37.585 CC lib/scsi/port.o 00:02:37.585 SO libspdk_lvol.so.10.0 00:02:37.585 SYMLINK libspdk_blobfs.so 00:02:37.585 CC lib/nvmf/ctrlr_bdev.o 00:02:37.585 SYMLINK libspdk_nbd.so 00:02:37.585 CC lib/nvmf/subsystem.o 00:02:37.585 SYMLINK libspdk_lvol.so 00:02:37.585 CC lib/nvmf/nvmf.o 00:02:37.841 CC lib/nvmf/nvmf_rpc.o 00:02:37.841 CC lib/ftl/ftl_l2p.o 00:02:37.841 CC lib/scsi/scsi.o 00:02:37.842 CC lib/nvmf/transport.o 00:02:37.842 LIB libspdk_ublk.a 00:02:37.842 SO libspdk_ublk.so.3.0 00:02:37.842 CC lib/scsi/scsi_bdev.o 00:02:37.842 SYMLINK libspdk_ublk.so 00:02:37.842 CC lib/ftl/ftl_l2p_flat.o 00:02:38.098 CC lib/ftl/ftl_nv_cache.o 00:02:38.098 CC lib/nvmf/tcp.o 00:02:38.356 CC lib/nvmf/stubs.o 00:02:38.613 CC lib/scsi/scsi_pr.o 00:02:38.613 CC lib/nvmf/mdns_server.o 00:02:38.613 CC lib/nvmf/vfio_user.o 00:02:38.869 CC lib/nvmf/rdma.o 00:02:38.869 CC lib/nvmf/auth.o 00:02:38.869 CC lib/scsi/scsi_rpc.o 00:02:38.869 CC lib/ftl/ftl_band.o 00:02:39.126 CC lib/scsi/task.o 00:02:39.126 CC lib/ftl/ftl_band_ops.o 00:02:39.126 CC lib/ftl/ftl_writer.o 00:02:39.384 CC lib/ftl/ftl_rq.o 00:02:39.384 LIB libspdk_scsi.a 00:02:39.384 SO libspdk_scsi.so.9.0 00:02:39.384 CC lib/ftl/ftl_reloc.o 00:02:39.384 CC lib/ftl/ftl_l2p_cache.o 00:02:39.384 CC lib/ftl/ftl_p2l.o 00:02:39.384 SYMLINK libspdk_scsi.so 00:02:39.384 CC lib/ftl/mngt/ftl_mngt.o 00:02:39.642 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:39.642 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:39.642 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:39.900 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:39.900 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:39.900 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:39.900 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:39.900 CC lib/iscsi/conn.o 00:02:39.900 CC lib/iscsi/init_grp.o 00:02:40.158 CC lib/iscsi/iscsi.o 00:02:40.158 CC lib/iscsi/md5.o 00:02:40.158 CC lib/iscsi/param.o 00:02:40.158 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:40.158 CC lib/iscsi/portal_grp.o 00:02:40.158 CC lib/iscsi/tgt_node.o 00:02:40.416 CC lib/vhost/vhost.o 00:02:40.416 CC lib/vhost/vhost_rpc.o 00:02:40.416 CC lib/iscsi/iscsi_subsystem.o 00:02:40.416 CC lib/iscsi/iscsi_rpc.o 00:02:40.673 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:40.673 CC lib/iscsi/task.o 00:02:40.673 CC lib/vhost/vhost_scsi.o 00:02:40.673 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:40.931 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:40.931 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:40.931 CC lib/vhost/vhost_blk.o 
00:02:40.931 CC lib/vhost/rte_vhost_user.o 00:02:40.931 CC lib/ftl/utils/ftl_conf.o 00:02:41.189 CC lib/ftl/utils/ftl_md.o 00:02:41.189 CC lib/ftl/utils/ftl_mempool.o 00:02:41.189 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.189 CC lib/ftl/utils/ftl_property.o 00:02:41.189 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.189 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.448 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.448 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.448 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.448 LIB libspdk_nvmf.a 00:02:41.448 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.706 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.706 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.706 SO libspdk_nvmf.so.18.1 00:02:41.706 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.706 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.964 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.964 CC lib/ftl/base/ftl_base_dev.o 00:02:41.964 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.964 LIB libspdk_iscsi.a 00:02:41.964 CC lib/ftl/ftl_trace.o 00:02:41.964 SO libspdk_iscsi.so.8.0 00:02:42.222 SYMLINK libspdk_nvmf.so 00:02:42.222 LIB libspdk_ftl.a 00:02:42.222 SYMLINK libspdk_iscsi.so 00:02:42.222 LIB libspdk_vhost.a 00:02:42.481 SO libspdk_vhost.so.8.0 00:02:42.482 SO libspdk_ftl.so.9.0 00:02:42.482 SYMLINK libspdk_vhost.so 00:02:42.741 SYMLINK libspdk_ftl.so 00:02:43.315 CC module/vfu_device/vfu_virtio.o 00:02:43.315 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.315 CC module/accel/error/accel_error.o 00:02:43.315 CC module/blob/bdev/blob_bdev.o 00:02:43.315 CC module/sock/posix/posix.o 00:02:43.315 CC module/accel/ioat/accel_ioat.o 00:02:43.315 CC module/accel/dsa/accel_dsa.o 00:02:43.315 CC module/keyring/file/keyring.o 00:02:43.315 CC module/sock/uring/uring.o 00:02:43.315 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.315 LIB libspdk_env_dpdk_rpc.a 00:02:43.315 SO libspdk_env_dpdk_rpc.so.6.0 00:02:43.315 SYMLINK libspdk_env_dpdk_rpc.so 00:02:43.315 CC module/keyring/file/keyring_rpc.o 00:02:43.607 CC module/accel/error/accel_error_rpc.o 00:02:43.607 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.607 LIB libspdk_scheduler_dynamic.a 00:02:43.607 SO libspdk_scheduler_dynamic.so.4.0 00:02:43.607 LIB libspdk_keyring_file.a 00:02:43.607 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.607 LIB libspdk_blob_bdev.a 00:02:43.607 SO libspdk_keyring_file.so.1.0 00:02:43.607 SO libspdk_blob_bdev.so.11.0 00:02:43.607 CC module/keyring/linux/keyring.o 00:02:43.607 LIB libspdk_accel_error.a 00:02:43.607 LIB libspdk_accel_ioat.a 00:02:43.607 SYMLINK libspdk_scheduler_dynamic.so 00:02:43.607 SO libspdk_accel_error.so.2.0 00:02:43.607 SO libspdk_accel_ioat.so.6.0 00:02:43.607 SYMLINK libspdk_keyring_file.so 00:02:43.607 SYMLINK libspdk_blob_bdev.so 00:02:43.607 CC module/keyring/linux/keyring_rpc.o 00:02:43.874 LIB libspdk_accel_dsa.a 00:02:43.874 SYMLINK libspdk_accel_error.so 00:02:43.874 SYMLINK libspdk_accel_ioat.so 00:02:43.874 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.874 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.874 SO libspdk_accel_dsa.so.5.0 00:02:43.874 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.874 LIB libspdk_keyring_linux.a 00:02:43.874 SYMLINK libspdk_accel_dsa.so 00:02:43.874 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.874 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.874 SO libspdk_keyring_linux.so.1.0 00:02:43.874 CC module/accel/iaa/accel_iaa.o 00:02:43.874 SYMLINK libspdk_keyring_linux.so 00:02:43.874 LIB libspdk_scheduler_dpdk_governor.a 00:02:43.874 CC 
module/accel/iaa/accel_iaa_rpc.o 00:02:44.132 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.132 LIB libspdk_scheduler_gscheduler.a 00:02:44.132 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.132 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.132 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.132 LIB libspdk_vfu_device.a 00:02:44.132 LIB libspdk_accel_iaa.a 00:02:44.132 SO libspdk_vfu_device.so.3.0 00:02:44.132 LIB libspdk_sock_posix.a 00:02:44.132 SO libspdk_accel_iaa.so.3.0 00:02:44.132 LIB libspdk_sock_uring.a 00:02:44.391 SO libspdk_sock_posix.so.6.0 00:02:44.391 SO libspdk_sock_uring.so.5.0 00:02:44.391 CC module/bdev/gpt/gpt.o 00:02:44.391 CC module/bdev/delay/vbdev_delay.o 00:02:44.391 SYMLINK libspdk_accel_iaa.so 00:02:44.391 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.391 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.391 CC module/bdev/error/vbdev_error.o 00:02:44.391 SYMLINK libspdk_vfu_device.so 00:02:44.391 CC module/bdev/malloc/bdev_malloc.o 00:02:44.391 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.391 SYMLINK libspdk_sock_posix.so 00:02:44.391 SYMLINK libspdk_sock_uring.so 00:02:44.391 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.391 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.391 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.391 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.649 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.649 LIB libspdk_bdev_error.a 00:02:44.649 LIB libspdk_bdev_gpt.a 00:02:44.649 LIB libspdk_blobfs_bdev.a 00:02:44.649 SO libspdk_bdev_error.so.6.0 00:02:44.649 CC module/bdev/null/bdev_null.o 00:02:44.649 SO libspdk_blobfs_bdev.so.6.0 00:02:44.649 SO libspdk_bdev_gpt.so.6.0 00:02:44.649 CC module/bdev/nvme/bdev_nvme.o 00:02:44.649 LIB libspdk_bdev_delay.a 00:02:44.649 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.907 SYMLINK libspdk_bdev_error.so 00:02:44.907 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.907 SO libspdk_bdev_delay.so.6.0 00:02:44.907 SYMLINK libspdk_blobfs_bdev.so 00:02:44.907 LIB libspdk_bdev_malloc.a 00:02:44.907 SYMLINK libspdk_bdev_gpt.so 00:02:44.907 SO libspdk_bdev_malloc.so.6.0 00:02:44.907 SYMLINK libspdk_bdev_delay.so 00:02:44.907 SYMLINK libspdk_bdev_malloc.so 00:02:44.907 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.907 LIB libspdk_bdev_lvol.a 00:02:44.907 CC module/bdev/split/vbdev_split.o 00:02:44.907 CC module/bdev/null/bdev_null_rpc.o 00:02:44.907 SO libspdk_bdev_lvol.so.6.0 00:02:44.907 CC module/bdev/raid/bdev_raid.o 00:02:45.165 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.165 CC module/bdev/uring/bdev_uring.o 00:02:45.165 LIB libspdk_bdev_passthru.a 00:02:45.165 SYMLINK libspdk_bdev_lvol.so 00:02:45.165 CC module/bdev/raid/bdev_raid_rpc.o 00:02:45.165 CC module/bdev/aio/bdev_aio.o 00:02:45.165 SO libspdk_bdev_passthru.so.6.0 00:02:45.165 LIB libspdk_bdev_null.a 00:02:45.165 SYMLINK libspdk_bdev_passthru.so 00:02:45.165 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.165 SO libspdk_bdev_null.so.6.0 00:02:45.165 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.423 SYMLINK libspdk_bdev_null.so 00:02:45.423 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.423 CC module/bdev/raid/bdev_raid_sb.o 00:02:45.423 CC module/bdev/raid/raid0.o 00:02:45.423 LIB libspdk_bdev_split.a 00:02:45.423 CC module/bdev/nvme/nvme_rpc.o 00:02:45.423 SO libspdk_bdev_split.so.6.0 00:02:45.423 LIB libspdk_bdev_zone_block.a 00:02:45.423 LIB libspdk_bdev_aio.a 00:02:45.681 CC module/bdev/uring/bdev_uring_rpc.o 00:02:45.681 SO libspdk_bdev_zone_block.so.6.0 00:02:45.681 SO libspdk_bdev_aio.so.6.0 00:02:45.681 
SYMLINK libspdk_bdev_split.so 00:02:45.681 SYMLINK libspdk_bdev_zone_block.so 00:02:45.681 SYMLINK libspdk_bdev_aio.so 00:02:45.681 CC module/bdev/raid/raid1.o 00:02:45.681 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.681 CC module/bdev/raid/concat.o 00:02:45.681 CC module/bdev/nvme/vbdev_opal.o 00:02:45.681 LIB libspdk_bdev_uring.a 00:02:45.940 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:45.940 CC module/bdev/ftl/bdev_ftl.o 00:02:45.940 SO libspdk_bdev_uring.so.6.0 00:02:45.940 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:45.940 SYMLINK libspdk_bdev_uring.so 00:02:45.940 CC module/bdev/iscsi/bdev_iscsi.o 00:02:45.940 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:45.940 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.198 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.198 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.198 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.198 LIB libspdk_bdev_ftl.a 00:02:46.198 LIB libspdk_bdev_raid.a 00:02:46.198 SO libspdk_bdev_ftl.so.6.0 00:02:46.456 SO libspdk_bdev_raid.so.6.0 00:02:46.456 SYMLINK libspdk_bdev_ftl.so 00:02:46.456 LIB libspdk_bdev_iscsi.a 00:02:46.456 SO libspdk_bdev_iscsi.so.6.0 00:02:46.456 SYMLINK libspdk_bdev_raid.so 00:02:46.456 SYMLINK libspdk_bdev_iscsi.so 00:02:46.715 LIB libspdk_bdev_virtio.a 00:02:46.715 SO libspdk_bdev_virtio.so.6.0 00:02:46.973 SYMLINK libspdk_bdev_virtio.so 00:02:47.906 LIB libspdk_bdev_nvme.a 00:02:47.906 SO libspdk_bdev_nvme.so.7.0 00:02:47.906 SYMLINK libspdk_bdev_nvme.so 00:02:48.471 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:48.471 CC module/event/subsystems/iobuf/iobuf.o 00:02:48.471 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:48.471 CC module/event/subsystems/vmd/vmd.o 00:02:48.471 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:48.471 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:48.471 CC module/event/subsystems/sock/sock.o 00:02:48.471 CC module/event/subsystems/keyring/keyring.o 00:02:48.471 CC module/event/subsystems/scheduler/scheduler.o 00:02:48.471 LIB libspdk_event_keyring.a 00:02:48.471 LIB libspdk_event_vhost_blk.a 00:02:48.471 LIB libspdk_event_vfu_tgt.a 00:02:48.471 LIB libspdk_event_vmd.a 00:02:48.471 LIB libspdk_event_sock.a 00:02:48.471 SO libspdk_event_keyring.so.1.0 00:02:48.471 LIB libspdk_event_scheduler.a 00:02:48.471 LIB libspdk_event_iobuf.a 00:02:48.471 SO libspdk_event_vhost_blk.so.3.0 00:02:48.728 SO libspdk_event_vfu_tgt.so.3.0 00:02:48.728 SO libspdk_event_vmd.so.6.0 00:02:48.728 SO libspdk_event_sock.so.5.0 00:02:48.728 SO libspdk_event_scheduler.so.4.0 00:02:48.728 SO libspdk_event_iobuf.so.3.0 00:02:48.728 SYMLINK libspdk_event_keyring.so 00:02:48.728 SYMLINK libspdk_event_vhost_blk.so 00:02:48.728 SYMLINK libspdk_event_vfu_tgt.so 00:02:48.728 SYMLINK libspdk_event_vmd.so 00:02:48.728 SYMLINK libspdk_event_sock.so 00:02:48.728 SYMLINK libspdk_event_scheduler.so 00:02:48.728 SYMLINK libspdk_event_iobuf.so 00:02:48.984 CC module/event/subsystems/accel/accel.o 00:02:49.241 LIB libspdk_event_accel.a 00:02:49.241 SO libspdk_event_accel.so.6.0 00:02:49.242 SYMLINK libspdk_event_accel.so 00:02:49.499 CC module/event/subsystems/bdev/bdev.o 00:02:49.756 LIB libspdk_event_bdev.a 00:02:49.756 SO libspdk_event_bdev.so.6.0 00:02:49.756 SYMLINK libspdk_event_bdev.so 00:02:50.014 CC module/event/subsystems/ublk/ublk.o 00:02:50.014 CC module/event/subsystems/scsi/scsi.o 00:02:50.014 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:50.014 CC module/event/subsystems/nbd/nbd.o 00:02:50.014 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:50.271 LIB 
libspdk_event_ublk.a 00:02:50.271 LIB libspdk_event_nbd.a 00:02:50.271 LIB libspdk_event_scsi.a 00:02:50.271 SO libspdk_event_ublk.so.3.0 00:02:50.271 SO libspdk_event_nbd.so.6.0 00:02:50.271 SO libspdk_event_scsi.so.6.0 00:02:50.271 SYMLINK libspdk_event_ublk.so 00:02:50.271 SYMLINK libspdk_event_nbd.so 00:02:50.271 SYMLINK libspdk_event_scsi.so 00:02:50.271 LIB libspdk_event_nvmf.a 00:02:50.529 SO libspdk_event_nvmf.so.6.0 00:02:50.529 SYMLINK libspdk_event_nvmf.so 00:02:50.529 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.529 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.787 LIB libspdk_event_vhost_scsi.a 00:02:50.787 SO libspdk_event_vhost_scsi.so.3.0 00:02:50.787 LIB libspdk_event_iscsi.a 00:02:50.787 SO libspdk_event_iscsi.so.6.0 00:02:50.787 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.045 SYMLINK libspdk_event_iscsi.so 00:02:51.045 SO libspdk.so.6.0 00:02:51.045 SYMLINK libspdk.so 00:02:51.303 CXX app/trace/trace.o 00:02:51.303 CC app/spdk_lspci/spdk_lspci.o 00:02:51.303 CC app/trace_record/trace_record.o 00:02:51.303 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:51.303 CC app/nvmf_tgt/nvmf_main.o 00:02:51.303 CC examples/util/zipf/zipf.o 00:02:51.303 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.303 CC app/spdk_tgt/spdk_tgt.o 00:02:51.303 CC test/thread/poller_perf/poller_perf.o 00:02:51.303 CC examples/ioat/perf/perf.o 00:02:51.560 LINK spdk_lspci 00:02:51.560 LINK interrupt_tgt 00:02:51.560 LINK poller_perf 00:02:51.560 LINK zipf 00:02:51.560 LINK nvmf_tgt 00:02:51.560 LINK spdk_tgt 00:02:51.818 LINK spdk_trace_record 00:02:51.818 LINK iscsi_tgt 00:02:51.818 LINK spdk_trace 00:02:51.818 LINK ioat_perf 00:02:51.818 CC app/spdk_nvme_perf/perf.o 00:02:51.818 TEST_HEADER include/spdk/accel.h 00:02:51.818 TEST_HEADER include/spdk/accel_module.h 00:02:51.818 TEST_HEADER include/spdk/assert.h 00:02:51.818 TEST_HEADER include/spdk/barrier.h 00:02:51.818 TEST_HEADER include/spdk/base64.h 00:02:51.818 TEST_HEADER include/spdk/bdev.h 00:02:51.818 TEST_HEADER include/spdk/bdev_module.h 00:02:52.076 TEST_HEADER include/spdk/bdev_zone.h 00:02:52.076 TEST_HEADER include/spdk/bit_array.h 00:02:52.076 TEST_HEADER include/spdk/bit_pool.h 00:02:52.076 TEST_HEADER include/spdk/blob_bdev.h 00:02:52.076 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:52.076 TEST_HEADER include/spdk/blobfs.h 00:02:52.076 TEST_HEADER include/spdk/blob.h 00:02:52.076 TEST_HEADER include/spdk/conf.h 00:02:52.076 TEST_HEADER include/spdk/config.h 00:02:52.076 TEST_HEADER include/spdk/cpuset.h 00:02:52.076 TEST_HEADER include/spdk/crc16.h 00:02:52.076 TEST_HEADER include/spdk/crc32.h 00:02:52.076 TEST_HEADER include/spdk/crc64.h 00:02:52.076 TEST_HEADER include/spdk/dif.h 00:02:52.076 TEST_HEADER include/spdk/dma.h 00:02:52.076 TEST_HEADER include/spdk/endian.h 00:02:52.076 TEST_HEADER include/spdk/env_dpdk.h 00:02:52.076 CC app/spdk_nvme_identify/identify.o 00:02:52.076 TEST_HEADER include/spdk/env.h 00:02:52.076 TEST_HEADER include/spdk/event.h 00:02:52.076 TEST_HEADER include/spdk/fd_group.h 00:02:52.076 TEST_HEADER include/spdk/fd.h 00:02:52.076 TEST_HEADER include/spdk/file.h 00:02:52.076 TEST_HEADER include/spdk/ftl.h 00:02:52.076 TEST_HEADER include/spdk/gpt_spec.h 00:02:52.076 TEST_HEADER include/spdk/hexlify.h 00:02:52.076 TEST_HEADER include/spdk/histogram_data.h 00:02:52.076 TEST_HEADER include/spdk/idxd.h 00:02:52.076 TEST_HEADER include/spdk/idxd_spec.h 00:02:52.076 TEST_HEADER include/spdk/init.h 00:02:52.076 TEST_HEADER include/spdk/ioat.h 00:02:52.076 TEST_HEADER include/spdk/ioat_spec.h 
00:02:52.076 TEST_HEADER include/spdk/iscsi_spec.h 00:02:52.076 TEST_HEADER include/spdk/json.h 00:02:52.076 CC examples/ioat/verify/verify.o 00:02:52.076 TEST_HEADER include/spdk/jsonrpc.h 00:02:52.076 TEST_HEADER include/spdk/keyring.h 00:02:52.076 TEST_HEADER include/spdk/keyring_module.h 00:02:52.076 TEST_HEADER include/spdk/likely.h 00:02:52.076 TEST_HEADER include/spdk/log.h 00:02:52.076 TEST_HEADER include/spdk/lvol.h 00:02:52.076 TEST_HEADER include/spdk/memory.h 00:02:52.076 TEST_HEADER include/spdk/mmio.h 00:02:52.076 CC test/dma/test_dma/test_dma.o 00:02:52.076 TEST_HEADER include/spdk/nbd.h 00:02:52.076 TEST_HEADER include/spdk/notify.h 00:02:52.076 TEST_HEADER include/spdk/nvme.h 00:02:52.076 TEST_HEADER include/spdk/nvme_intel.h 00:02:52.076 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:52.076 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:52.076 TEST_HEADER include/spdk/nvme_spec.h 00:02:52.076 TEST_HEADER include/spdk/nvme_zns.h 00:02:52.077 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.077 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:52.077 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:52.077 TEST_HEADER include/spdk/nvmf.h 00:02:52.077 CC test/app/bdev_svc/bdev_svc.o 00:02:52.077 TEST_HEADER include/spdk/nvmf_spec.h 00:02:52.077 TEST_HEADER include/spdk/nvmf_transport.h 00:02:52.077 TEST_HEADER include/spdk/opal.h 00:02:52.077 TEST_HEADER include/spdk/opal_spec.h 00:02:52.077 TEST_HEADER include/spdk/pci_ids.h 00:02:52.077 TEST_HEADER include/spdk/pipe.h 00:02:52.077 TEST_HEADER include/spdk/queue.h 00:02:52.077 TEST_HEADER include/spdk/reduce.h 00:02:52.077 TEST_HEADER include/spdk/rpc.h 00:02:52.077 TEST_HEADER include/spdk/scheduler.h 00:02:52.077 TEST_HEADER include/spdk/scsi.h 00:02:52.077 TEST_HEADER include/spdk/scsi_spec.h 00:02:52.077 TEST_HEADER include/spdk/sock.h 00:02:52.077 TEST_HEADER include/spdk/stdinc.h 00:02:52.077 TEST_HEADER include/spdk/string.h 00:02:52.077 TEST_HEADER include/spdk/thread.h 00:02:52.077 TEST_HEADER include/spdk/trace.h 00:02:52.077 TEST_HEADER include/spdk/trace_parser.h 00:02:52.077 TEST_HEADER include/spdk/tree.h 00:02:52.077 TEST_HEADER include/spdk/ublk.h 00:02:52.077 TEST_HEADER include/spdk/util.h 00:02:52.077 TEST_HEADER include/spdk/uuid.h 00:02:52.077 TEST_HEADER include/spdk/version.h 00:02:52.077 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:52.077 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:52.077 TEST_HEADER include/spdk/vhost.h 00:02:52.077 TEST_HEADER include/spdk/vmd.h 00:02:52.077 TEST_HEADER include/spdk/xor.h 00:02:52.077 TEST_HEADER include/spdk/zipf.h 00:02:52.077 CXX test/cpp_headers/accel.o 00:02:52.335 CC examples/vmd/lsvmd/lsvmd.o 00:02:52.335 CC examples/sock/hello_world/hello_sock.o 00:02:52.335 CC examples/thread/thread/thread_ex.o 00:02:52.335 LINK bdev_svc 00:02:52.335 LINK verify 00:02:52.335 LINK spdk_nvme_discover 00:02:52.335 CXX test/cpp_headers/accel_module.o 00:02:52.335 LINK lsvmd 00:02:52.593 LINK test_dma 00:02:52.593 CXX test/cpp_headers/assert.o 00:02:52.593 CXX test/cpp_headers/barrier.o 00:02:52.593 LINK hello_sock 00:02:52.593 CC examples/vmd/led/led.o 00:02:52.593 LINK thread 00:02:52.593 CC test/app/histogram_perf/histogram_perf.o 00:02:52.593 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:52.593 CXX test/cpp_headers/base64.o 00:02:52.850 LINK led 00:02:52.851 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:52.851 CC test/app/jsoncat/jsoncat.o 00:02:52.851 CC test/app/stub/stub.o 00:02:52.851 LINK histogram_perf 00:02:52.851 CXX test/cpp_headers/bdev.o 00:02:52.851 LINK jsoncat 
00:02:53.108 LINK stub 00:02:53.108 LINK spdk_nvme_identify 00:02:53.108 LINK spdk_nvme_perf 00:02:53.108 CXX test/cpp_headers/bdev_module.o 00:02:53.108 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.108 CC examples/idxd/perf/perf.o 00:02:53.109 CC examples/accel/perf/accel_perf.o 00:02:53.109 LINK nvme_fuzz 00:02:53.366 CXX test/cpp_headers/bdev_zone.o 00:02:53.366 CC test/event/event_perf/event_perf.o 00:02:53.366 CC test/event/reactor/reactor.o 00:02:53.366 CC app/spdk_top/spdk_top.o 00:02:53.624 LINK reactor 00:02:53.624 LINK event_perf 00:02:53.624 LINK idxd_perf 00:02:53.624 CXX test/cpp_headers/bit_array.o 00:02:53.624 CC app/vhost/vhost.o 00:02:53.624 CC examples/blob/hello_world/hello_blob.o 00:02:53.624 CXX test/cpp_headers/bit_pool.o 00:02:53.624 CXX test/cpp_headers/blob_bdev.o 00:02:53.881 LINK mem_callbacks 00:02:53.881 CC test/event/reactor_perf/reactor_perf.o 00:02:53.881 LINK vhost 00:02:53.881 LINK accel_perf 00:02:53.881 CC examples/blob/cli/blobcli.o 00:02:53.881 LINK hello_blob 00:02:53.881 CXX test/cpp_headers/blobfs_bdev.o 00:02:53.881 LINK reactor_perf 00:02:53.881 CC test/env/vtophys/vtophys.o 00:02:54.139 CXX test/cpp_headers/blobfs.o 00:02:54.139 CC examples/nvme/hello_world/hello_world.o 00:02:54.139 CC examples/nvme/reconnect/reconnect.o 00:02:54.139 LINK vtophys 00:02:54.139 CXX test/cpp_headers/blob.o 00:02:54.139 CC test/event/app_repeat/app_repeat.o 00:02:54.139 CC app/spdk_dd/spdk_dd.o 00:02:54.397 CC app/fio/nvme/fio_plugin.o 00:02:54.397 LINK hello_world 00:02:54.397 LINK app_repeat 00:02:54.397 CXX test/cpp_headers/conf.o 00:02:54.397 LINK blobcli 00:02:54.397 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.655 LINK spdk_top 00:02:54.655 LINK reconnect 00:02:54.655 CXX test/cpp_headers/config.o 00:02:54.655 CXX test/cpp_headers/cpuset.o 00:02:54.655 LINK env_dpdk_post_init 00:02:54.655 CXX test/cpp_headers/crc16.o 00:02:54.655 CC test/event/scheduler/scheduler.o 00:02:54.913 CC app/fio/bdev/fio_plugin.o 00:02:54.913 LINK spdk_dd 00:02:54.913 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.913 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:54.913 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.913 CXX test/cpp_headers/crc32.o 00:02:54.913 CC test/env/memory/memory_ut.o 00:02:54.913 LINK scheduler 00:02:55.170 CXX test/cpp_headers/crc64.o 00:02:55.171 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.171 LINK iscsi_fuzz 00:02:55.171 LINK spdk_nvme 00:02:55.171 LINK hello_bdev 00:02:55.171 CXX test/cpp_headers/dif.o 00:02:55.171 CXX test/cpp_headers/dma.o 00:02:55.171 CC examples/bdev/bdevperf/bdevperf.o 00:02:55.429 CXX test/cpp_headers/endian.o 00:02:55.429 CXX test/cpp_headers/env_dpdk.o 00:02:55.429 CXX test/cpp_headers/env.o 00:02:55.429 LINK spdk_bdev 00:02:55.429 CC examples/nvme/arbitration/arbitration.o 00:02:55.429 CC test/nvme/aer/aer.o 00:02:55.429 LINK nvme_manage 00:02:55.429 CXX test/cpp_headers/event.o 00:02:55.686 LINK vhost_fuzz 00:02:55.686 CC test/nvme/reset/reset.o 00:02:55.686 CC test/nvme/sgl/sgl.o 00:02:55.686 CC test/env/pci/pci_ut.o 00:02:55.686 CXX test/cpp_headers/fd_group.o 00:02:55.686 CC test/nvme/e2edp/nvme_dp.o 00:02:55.945 CC test/rpc_client/rpc_client_test.o 00:02:55.945 LINK aer 00:02:55.945 LINK arbitration 00:02:55.945 CXX test/cpp_headers/fd.o 00:02:55.945 LINK reset 00:02:55.945 LINK sgl 00:02:55.945 CXX test/cpp_headers/file.o 00:02:55.945 LINK rpc_client_test 00:02:56.203 LINK nvme_dp 00:02:56.203 CC examples/nvme/hotplug/hotplug.o 00:02:56.203 CC examples/nvme/cmb_copy/cmb_copy.o 
00:02:56.203 LINK pci_ut 00:02:56.203 CXX test/cpp_headers/ftl.o 00:02:56.203 CXX test/cpp_headers/gpt_spec.o 00:02:56.203 LINK bdevperf 00:02:56.203 LINK memory_ut 00:02:56.203 CC test/nvme/overhead/overhead.o 00:02:56.462 CC test/nvme/err_injection/err_injection.o 00:02:56.462 LINK cmb_copy 00:02:56.462 CC test/nvme/startup/startup.o 00:02:56.462 CXX test/cpp_headers/hexlify.o 00:02:56.462 LINK hotplug 00:02:56.462 CC test/nvme/reserve/reserve.o 00:02:56.462 CXX test/cpp_headers/histogram_data.o 00:02:56.462 CXX test/cpp_headers/idxd.o 00:02:56.462 CXX test/cpp_headers/idxd_spec.o 00:02:56.462 LINK err_injection 00:02:56.722 CXX test/cpp_headers/init.o 00:02:56.722 LINK startup 00:02:56.722 LINK overhead 00:02:56.722 LINK reserve 00:02:56.722 CC examples/nvme/abort/abort.o 00:02:56.722 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.722 CXX test/cpp_headers/ioat.o 00:02:56.980 CC test/accel/dif/dif.o 00:02:56.980 CC test/nvme/simple_copy/simple_copy.o 00:02:56.980 CC test/nvme/connect_stress/connect_stress.o 00:02:56.980 CC test/nvme/boot_partition/boot_partition.o 00:02:56.980 CC test/blobfs/mkfs/mkfs.o 00:02:56.980 CXX test/cpp_headers/ioat_spec.o 00:02:56.980 CC test/lvol/esnap/esnap.o 00:02:56.980 LINK pmr_persistence 00:02:56.980 CC test/nvme/compliance/nvme_compliance.o 00:02:57.238 LINK boot_partition 00:02:57.238 LINK connect_stress 00:02:57.238 CXX test/cpp_headers/iscsi_spec.o 00:02:57.238 LINK mkfs 00:02:57.238 LINK simple_copy 00:02:57.238 CXX test/cpp_headers/json.o 00:02:57.238 LINK abort 00:02:57.496 CXX test/cpp_headers/jsonrpc.o 00:02:57.496 CXX test/cpp_headers/keyring.o 00:02:57.496 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.496 LINK dif 00:02:57.496 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.496 CC test/nvme/fdp/fdp.o 00:02:57.496 LINK nvme_compliance 00:02:57.496 CC test/nvme/cuse/cuse.o 00:02:57.496 CXX test/cpp_headers/keyring_module.o 00:02:57.496 CXX test/cpp_headers/likely.o 00:02:57.754 LINK doorbell_aers 00:02:57.754 CXX test/cpp_headers/log.o 00:02:57.754 CXX test/cpp_headers/lvol.o 00:02:57.754 LINK fused_ordering 00:02:57.754 CC examples/nvmf/nvmf/nvmf.o 00:02:57.754 CXX test/cpp_headers/memory.o 00:02:57.754 CXX test/cpp_headers/mmio.o 00:02:57.754 CXX test/cpp_headers/nbd.o 00:02:57.754 CXX test/cpp_headers/notify.o 00:02:57.754 CXX test/cpp_headers/nvme.o 00:02:57.754 LINK fdp 00:02:58.011 CXX test/cpp_headers/nvme_intel.o 00:02:58.011 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.011 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.011 CXX test/cpp_headers/nvme_spec.o 00:02:58.011 CXX test/cpp_headers/nvme_zns.o 00:02:58.011 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.011 CC test/bdev/bdevio/bdevio.o 00:02:58.011 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.011 LINK nvmf 00:02:58.268 CXX test/cpp_headers/nvmf.o 00:02:58.268 CXX test/cpp_headers/nvmf_spec.o 00:02:58.268 CXX test/cpp_headers/nvmf_transport.o 00:02:58.268 CXX test/cpp_headers/opal.o 00:02:58.268 CXX test/cpp_headers/opal_spec.o 00:02:58.268 CXX test/cpp_headers/pci_ids.o 00:02:58.268 CXX test/cpp_headers/pipe.o 00:02:58.268 CXX test/cpp_headers/queue.o 00:02:58.525 CXX test/cpp_headers/reduce.o 00:02:58.525 CXX test/cpp_headers/rpc.o 00:02:58.525 CXX test/cpp_headers/scheduler.o 00:02:58.525 CXX test/cpp_headers/scsi.o 00:02:58.525 CXX test/cpp_headers/scsi_spec.o 00:02:58.525 CXX test/cpp_headers/sock.o 00:02:58.525 LINK bdevio 00:02:58.525 CXX test/cpp_headers/stdinc.o 00:02:58.525 CXX test/cpp_headers/string.o 00:02:58.784 CXX test/cpp_headers/thread.o 
00:02:58.784 CXX test/cpp_headers/trace.o 00:02:58.784 CXX test/cpp_headers/trace_parser.o 00:02:58.784 CXX test/cpp_headers/tree.o 00:02:58.784 CXX test/cpp_headers/ublk.o 00:02:58.784 CXX test/cpp_headers/util.o 00:02:58.784 CXX test/cpp_headers/uuid.o 00:02:58.784 CXX test/cpp_headers/version.o 00:02:58.784 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.784 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.784 CXX test/cpp_headers/vhost.o 00:02:58.784 CXX test/cpp_headers/vmd.o 00:02:58.784 CXX test/cpp_headers/xor.o 00:02:58.784 CXX test/cpp_headers/zipf.o 00:02:59.042 LINK cuse 00:03:04.326 LINK esnap 00:03:04.584 00:03:04.584 real 1m17.029s 00:03:04.584 user 7m24.616s 00:03:04.584 sys 1m31.397s 00:03:04.584 05:49:20 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:04.584 05:49:20 make -- common/autotest_common.sh@10 -- $ set +x 00:03:04.584 ************************************ 00:03:04.584 END TEST make 00:03:04.584 ************************************ 00:03:04.584 05:49:20 -- common/autotest_common.sh@1142 -- $ return 0 00:03:04.584 05:49:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:04.584 05:49:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:04.584 05:49:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:04.584 05:49:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.584 05:49:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:04.584 05:49:20 -- pm/common@44 -- $ pid=5197 00:03:04.584 05:49:20 -- pm/common@50 -- $ kill -TERM 5197 00:03:04.584 05:49:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.584 05:49:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:04.584 05:49:20 -- pm/common@44 -- $ pid=5199 00:03:04.584 05:49:20 -- pm/common@50 -- $ kill -TERM 5199 00:03:04.584 05:49:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:04.584 05:49:20 -- nvmf/common.sh@7 -- # uname -s 00:03:04.584 05:49:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:04.584 05:49:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:04.584 05:49:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:04.584 05:49:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:04.584 05:49:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:04.584 05:49:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:04.584 05:49:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:04.584 05:49:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:04.584 05:49:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:04.584 05:49:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:04.844 05:49:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:03:04.844 05:49:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:03:04.844 05:49:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:04.844 05:49:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:04.844 05:49:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:04.844 05:49:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:04.844 05:49:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:04.844 05:49:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:04.844 05:49:20 -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.844 05:49:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.844 05:49:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.844 05:49:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.844 05:49:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.844 05:49:20 -- paths/export.sh@5 -- # export PATH 00:03:04.844 05:49:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.844 05:49:20 -- nvmf/common.sh@47 -- # : 0 00:03:04.844 05:49:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:04.844 05:49:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:04.844 05:49:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:04.844 05:49:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:04.844 05:49:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:04.844 05:49:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:04.844 05:49:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:04.844 05:49:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:04.844 05:49:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:04.844 05:49:20 -- spdk/autotest.sh@32 -- # uname -s 00:03:04.844 05:49:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:04.844 05:49:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:04.844 05:49:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:04.844 05:49:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:04.844 05:49:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:04.844 05:49:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:04.844 05:49:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:04.844 05:49:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:04.844 05:49:20 -- spdk/autotest.sh@48 -- # udevadm_pid=53505 00:03:04.844 05:49:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:04.844 05:49:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:04.844 05:49:20 -- pm/common@17 -- # local monitor 00:03:04.844 05:49:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.844 05:49:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.844 05:49:20 -- pm/common@25 -- # sleep 1 00:03:04.844 05:49:20 -- pm/common@21 -- # date +%s 00:03:04.844 05:49:20 -- 
pm/common@21 -- # date +%s 00:03:04.844 05:49:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720676960 00:03:04.844 05:49:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720676960 00:03:04.844 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720676960_collect-vmstat.pm.log 00:03:04.844 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720676960_collect-cpu-load.pm.log 00:03:05.780 05:49:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.780 05:49:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:05.780 05:49:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:05.780 05:49:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.780 05:49:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:05.780 05:49:21 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:05.780 05:49:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.780 05:49:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:05.780 05:49:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:05.780 05:49:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:05.780 05:49:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:05.780 05:49:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:05.780 05:49:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:05.780 05:49:21 -- common/autotest_common.sh@1455 -- # uname 00:03:05.780 05:49:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:05.780 05:49:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:05.780 05:49:21 -- common/autotest_common.sh@1475 -- # uname 00:03:05.780 05:49:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:05.780 05:49:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:05.781 05:49:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:05.781 05:49:21 -- spdk/autotest.sh@72 -- # hash lcov 00:03:05.781 05:49:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:05.781 05:49:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:05.781 --rc lcov_branch_coverage=1 00:03:05.781 --rc lcov_function_coverage=1 00:03:05.781 --rc genhtml_branch_coverage=1 00:03:05.781 --rc genhtml_function_coverage=1 00:03:05.781 --rc genhtml_legend=1 00:03:05.781 --rc geninfo_all_blocks=1 00:03:05.781 ' 00:03:05.781 05:49:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:05.781 --rc lcov_branch_coverage=1 00:03:05.781 --rc lcov_function_coverage=1 00:03:05.781 --rc genhtml_branch_coverage=1 00:03:05.781 --rc genhtml_function_coverage=1 00:03:05.781 --rc genhtml_legend=1 00:03:05.781 --rc geninfo_all_blocks=1 00:03:05.781 ' 00:03:05.781 05:49:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:05.781 --rc lcov_branch_coverage=1 00:03:05.781 --rc lcov_function_coverage=1 00:03:05.781 --rc genhtml_branch_coverage=1 00:03:05.781 --rc genhtml_function_coverage=1 00:03:05.781 --rc genhtml_legend=1 00:03:05.781 --rc geninfo_all_blocks=1 00:03:05.781 --no-external' 00:03:05.781 05:49:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:05.781 --rc lcov_branch_coverage=1 00:03:05.781 --rc lcov_function_coverage=1 00:03:05.781 --rc 
genhtml_branch_coverage=1 00:03:05.781 --rc genhtml_function_coverage=1 00:03:05.781 --rc genhtml_legend=1 00:03:05.781 --rc geninfo_all_blocks=1 00:03:05.781 --no-external' 00:03:05.781 05:49:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.039 lcov: LCOV version 1.14 00:03:06.039 05:49:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:20.913 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:20.913 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:35.816 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:35.816 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:35.816 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:35.816 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:35.817 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:35.817 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:35.817 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:38.351 05:49:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:38.351 05:49:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:38.351 05:49:53 -- common/autotest_common.sh@10 -- # set +x 00:03:38.351 05:49:53 -- spdk/autotest.sh@91 -- # rm -f 00:03:38.351 05:49:53 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:38.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.870 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:38.870 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:38.870 05:49:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:38.870 05:49:54 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.870 05:49:54 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.870 05:49:54 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.870 05:49:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.870 05:49:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.870 05:49:54 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.870 05:49:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1672 -- # for 
nvme in /sys/block/nvme* 00:03:38.870 05:49:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:38.870 05:49:54 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:38.870 05:49:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.870 05:49:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:38.870 05:49:54 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:38.870 05:49:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.870 05:49:54 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:38.870 05:49:54 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:38.870 05:49:54 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:38.870 05:49:54 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.870 05:49:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:38.870 05:49:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.870 05:49:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.870 05:49:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:38.870 05:49:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:38.870 05:49:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.870 No valid GPT data, bailing 00:03:38.870 05:49:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.870 05:49:54 -- scripts/common.sh@391 -- # pt= 00:03:38.870 05:49:54 -- scripts/common.sh@392 -- # return 1 00:03:38.870 05:49:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.870 1+0 records in 00:03:38.870 1+0 records out 00:03:38.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375707 s, 279 MB/s 00:03:38.870 05:49:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.870 05:49:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.870 05:49:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:38.870 05:49:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:38.870 05:49:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:38.870 No valid GPT data, bailing 00:03:38.870 05:49:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:38.870 05:49:54 -- scripts/common.sh@391 -- # pt= 00:03:38.870 05:49:54 -- scripts/common.sh@392 -- # return 1 00:03:38.870 05:49:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:38.870 1+0 records in 00:03:38.870 1+0 records out 00:03:38.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404475 s, 259 MB/s 00:03:38.870 05:49:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.870 05:49:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.870 05:49:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:38.870 05:49:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:38.870 05:49:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:38.870 No valid GPT data, bailing 
00:03:38.870 05:49:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:39.130 05:49:54 -- scripts/common.sh@391 -- # pt= 00:03:39.130 05:49:54 -- scripts/common.sh@392 -- # return 1 00:03:39.130 05:49:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:39.130 1+0 records in 00:03:39.130 1+0 records out 00:03:39.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421146 s, 249 MB/s 00:03:39.130 05:49:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.130 05:49:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.130 05:49:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:39.130 05:49:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:39.130 05:49:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:39.130 No valid GPT data, bailing 00:03:39.130 05:49:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:39.130 05:49:54 -- scripts/common.sh@391 -- # pt= 00:03:39.130 05:49:54 -- scripts/common.sh@392 -- # return 1 00:03:39.130 05:49:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:39.130 1+0 records in 00:03:39.130 1+0 records out 00:03:39.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441964 s, 237 MB/s 00:03:39.130 05:49:54 -- spdk/autotest.sh@118 -- # sync 00:03:39.130 05:49:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.130 05:49:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.130 05:49:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.035 05:49:56 -- spdk/autotest.sh@124 -- # uname -s 00:03:41.035 05:49:56 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:41.035 05:49:56 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.035 05:49:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.035 05:49:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.035 05:49:56 -- common/autotest_common.sh@10 -- # set +x 00:03:41.035 ************************************ 00:03:41.035 START TEST setup.sh 00:03:41.035 ************************************ 00:03:41.035 05:49:56 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:41.035 * Looking for test storage... 00:03:41.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.035 05:49:56 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:41.035 05:49:56 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.035 05:49:56 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.035 05:49:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.035 05:49:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.035 05:49:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.035 ************************************ 00:03:41.035 START TEST acl 00:03:41.035 ************************************ 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:41.035 * Looking for test storage... 
00:03:41.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:41.035 05:49:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.035 05:49:56 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:41.035 05:49:56 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.035 05:49:56 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.604 05:49:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:41.604 05:49:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:41.604 05:49:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.604 05:49:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:41.604 05:49:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.604 05:49:57 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:42.541 05:49:58 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 Hugepages 00:03:42.541 node hugesize free / total 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 00:03:42.541 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:42.541 05:49:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.541 05:49:58 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.541 05:49:58 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.541 05:49:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.541 ************************************ 00:03:42.541 START TEST denied 00:03:42.541 ************************************ 00:03:42.541 05:49:58 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:42.541 05:49:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:42.541 05:49:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:42.541 05:49:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:42.541 05:49:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.541 05:49:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.478 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.478 05:49:59 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.045 00:03:44.045 real 0m1.361s 00:03:44.045 user 0m0.553s 00:03:44.045 sys 0m0.760s 00:03:44.045 05:49:59 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.045 05:49:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:44.045 ************************************ 00:03:44.045 END TEST denied 00:03:44.045 ************************************ 00:03:44.045 05:49:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:44.045 05:49:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:44.045 05:49:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.045 05:49:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.045 05:49:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.045 ************************************ 00:03:44.045 START TEST allowed 00:03:44.045 ************************************ 00:03:44.045 05:49:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:44.045 05:49:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:44.045 05:49:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:44.045 05:49:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:44.045 05:49:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.045 05:49:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.982 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.982 05:50:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:45.550 00:03:45.550 real 0m1.464s 00:03:45.550 user 0m0.646s 00:03:45.550 sys 0m0.795s 00:03:45.550 ************************************ 00:03:45.550 END TEST allowed 
00:03:45.550 ************************************ 00:03:45.550 05:50:01 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.550 05:50:01 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:45.550 05:50:01 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:45.550 ************************************ 00:03:45.550 END TEST acl 00:03:45.550 ************************************ 00:03:45.550 00:03:45.550 real 0m4.579s 00:03:45.550 user 0m2.044s 00:03:45.550 sys 0m2.479s 00:03:45.550 05:50:01 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.550 05:50:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:45.550 05:50:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:45.550 05:50:01 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.550 05:50:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.550 05:50:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.550 05:50:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.550 ************************************ 00:03:45.550 START TEST hugepages 00:03:45.550 ************************************ 00:03:45.550 05:50:01 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.550 * Looking for test storage... 00:03:45.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.550 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.550 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.550 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.550 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 5799904 kB' 'MemAvailable: 7390220 kB' 'Buffers: 2436 kB' 'Cached: 1804216 kB' 'SwapCached: 0 kB' 'Active: 435520 kB' 'Inactive: 1476080 kB' 'Active(anon): 115436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106604 kB' 'Mapped: 48724 kB' 
'Shmem: 10488 kB' 'KReclaimable: 62628 kB' 'Slab: 134564 kB' 'SReclaimable: 62628 kB' 'SUnreclaim: 71936 kB' 'KernelStack: 6276 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412428 kB' 'Committed_AS: 335176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.551 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.552 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.810 05:50:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.810 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.811 05:50:01 
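The wall of xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'Key: value' pair at a time, skipping every key until it reaches Hugepagesize, then echoing the value (2048 kB on this VM) and returning. setup/hugepages.sh takes that as default_hugepages=2048 and points default_huge_nr at /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and global_huge_nr at /proc/sys/vm/nr_hugepages. A minimal standalone sketch of the scan pattern, written here only for readability and not the SPDK helper itself:

    # sketch: scan a meminfo-style file for one key and print its value (kB)
    get_meminfo_sketch() {                    # hypothetical name, not from the repo
        local get=$1 var val _
        while IFS=': ' read -r var val _; do  # same IFS/read pattern as in the trace
            [[ $var == "$get" ]] || continue  # skip every other key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch Hugepagesize           # prints 2048 here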
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.811 05:50:01 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.811 05:50:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.811 05:50:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.811 05:50:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.811 ************************************ 00:03:45.811 START TEST default_setup 00:03:45.811 ************************************ 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.811 05:50:01 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.379 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.379 0000:00:11.0 (1b36 
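Two configuration steps are interleaved in the stretch above. clear_hp writes 0 into every hugepages-*/nr_hugepages file under the per-node sysfs tree and exports CLEAR_HUGE=yes, so default_setup starts from an empty pool; get_test_nr_hugepages then converts the requested 2097152 kB into a page count against the 2048 kB default page size, which works out to the nr_hugepages=1024 recorded in the trace. A hedged sketch of both steps (single-node VM as in this run; writing the sysfs files needs root):

    # empty the per-node pools before the test (node0 is the only node here)
    for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
    export CLEAR_HUGE=yes

    # sizing arithmetic: a 2 GiB request over 2 MiB pages
    size_kb=2097152
    default_hugepages=2048                           # kB, from the Hugepagesize lookup
    nr_hugepages=$(( size_kb / default_hugepages ))  # 1024 pages, all assigned to node 0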
0010): nvme -> uio_pci_generic 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:46.379 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7902356 kB' 'MemAvailable: 9492532 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 452008 kB' 'Inactive: 1476088 kB' 'Active(anon): 131924 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123052 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134196 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71860 kB' 'KernelStack: 6384 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
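The second full meminfo snapshot above is verify_nr_hugepages checking the result of default_setup: HugePages_Total and HugePages_Free now read 1024 and Hugetlb reads 2097152 kB, i.e. the 2 GiB pool that was requested. get_meminfo can also be pointed at a per-node snapshot; with node left empty it falls back to /proc/meminfo, and per-node files have their 'Node N ' prefix stripped so the same key scan works on either source. An assumed reconstruction of that selection logic, condensed from the trace:

    node=""                                           # empty -> use the global file
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                                  # needed for the +([0-9]) pattern
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                  # no-op for /proc/meminfo
    printf '%s\n' "${mem[@]}" | grep AnonHugePages    # the field the scan below extracts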
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.643 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
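AnonHugePages comes back as 0 kB, so the test records anon=0 and immediately re-runs the same helper for HugePages_Surp, which these snapshots also list as 0. Outside the harness the same counters can be spot-checked directly; a couple of hypothetical one-liners equivalent to what the trace is extracting:

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugetlb)' /proc/meminfo
    cat /proc/sys/vm/nr_hugepages                               # global pool, 1024 after default_setup
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages  # same figure via the per-size knob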
# mem=("${mem[@]#Node +([0-9]) }") 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7902356 kB' 'MemAvailable: 9492532 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451712 kB' 'Inactive: 1476088 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122704 kB' 'Mapped: 48504 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134188 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71852 kB' 'KernelStack: 6320 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.644 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.645 05:50:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
[setup/common.sh@31-32: the read/compare loop continues past each remaining non-matching /proc/meminfo key -- Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- until the requested key matches]
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.645 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.646 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7902104 kB' 'MemAvailable: 9492280 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451580 kB' 'Inactive: 1476088 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122616 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134180 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71844 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
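The lookups traced here (HugePages_Surp just returned 0; HugePages_Rsvd is in progress) both come down to scanning a meminfo file for a single key and printing its value. A minimal stand-alone sketch of that pattern, using a hypothetical helper name get_meminfo_value rather than the actual get_meminfo in setup/common.sh:

  # Read /proc/meminfo line by line; print the value of the requested key.
  get_meminfo_value() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skip non-matching keys, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                                # key not present
  }

  get_meminfo_value HugePages_Surp   # prints 0 on this runner, matching the trace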
[setup/common.sh@31-32: the read/compare loop walks the snapshot above and continues past every key from MemTotal through HugePages_Free before HugePages_Rsvd matches]
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.647 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:46.647 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7902104 kB' 'MemAvailable: 9492280 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451836 kB' 'Inactive: 1476088 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122872 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134180 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71844 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
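The arithmetic checks traced at setup/hugepages.sh@107-110 compare the kernel's hugepage accounting against the requested pool size. A rough, self-contained sketch of that kind of verification, mirroring the expression visible in the trace (the awk parsing and variable names are illustrative, not the script's own code):

  expected=1024                                                    # requested default pool
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
  if (( total == expected + surp + resv )); then                   # 1024 == 1024 + 0 + 0 on this runner
      echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
  else
      echo "hugepage accounting mismatch: total=$total expected=$expected" >&2
  fi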
[setup/common.sh@31-32: the read/compare loop walks the snapshot above and continues past every key from MemTotal through Unaccepted before HugePages_Total matches]
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
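get_nodes found a single NUMA node, and the per-node pass that follows re-reads the hugepage counters from /sys/devices/system/node/node*/meminfo, where each line carries a "Node <n>" prefix. A small sketch of that per-node read (loop and variable names are illustrative, not the hugepages.sh code):

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
      pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
      echo "node${node}: HugePages_Total=${pages}"
  done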
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7905900 kB' 'MemUsed: 4336052 kB' 'SwapCached: 0 kB' 'Active: 452200 kB' 'Inactive: 1476096 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1806644 kB' 'Mapped: 48492 kB' 'AnonPages: 123128 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134184 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the read/compare loop walks the node0 snapshot above and continues past every key from MemTotal through SUnreclaim]
00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.648 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
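Note for readers skimming this trace: the long runs of `continue` above are a field-by-field scan of /proc/meminfo (or, when a node is given, /sys/devices/system/node/node0/meminfo) looking for a single key such as HugePages_Surp, using `mapfile`, a "Node N " prefix strip, and `IFS=': ' read`. Below is a minimal, self-contained sketch of that parsing pattern only; the helper name `get_meminfo_sketch` is hypothetical and this is not the SPDK setup/common.sh source.

```bash
#!/usr/bin/env bash
# Sketch (assumption: simplified stand-in, not SPDK's setup/common.sh) of the
# meminfo lookup pattern visible in the xtrace above.
shopt -s extglob   # required for the +([0-9]) pattern used in the prefix strip

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _

    # Per-node files prefix every line with "Node N ", e.g.
    # "Node 0 HugePages_Surp: 0", so prefer them when a node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix if present

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" -> var/val
        if [[ $var == "$get" ]]; then
            echo "$val"                          # value as reported (kB or pages)
            return 0
        fi
    done
    return 1
}

# Example: surplus huge pages on node 0 (prints e.g. "0", matching the trace)
get_meminfo_sketch HugePages_Surp 0
```

Each non-matching field produces one `continue` entry in the xtrace, which is why the trace above repeats once per /proc/meminfo key before echoing the value and returning 0.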
00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.649 node0=1024 expecting 1024 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.649 ************************************ 00:03:46.649 END TEST default_setup 00:03:46.649 ************************************ 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.649 00:03:46.649 real 0m0.951s 00:03:46.649 user 0m0.464s 00:03:46.649 sys 0m0.422s 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.649 05:50:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:46.649 05:50:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.649 05:50:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.649 05:50:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.649 05:50:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.649 05:50:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.649 ************************************ 00:03:46.649 START TEST per_node_1G_alloc 00:03:46.649 ************************************ 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.649 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.649 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.222 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.222 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.222 05:50:02 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8952740 kB' 'MemAvailable: 10542924 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 452136 kB' 'Inactive: 1476096 kB' 'Active(anon): 132052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134128 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71792 kB' 'KernelStack: 6324 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.223 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8953040 kB' 'MemAvailable: 10543224 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451776 kB' 'Inactive: 1476096 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122832 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134092 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.224 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.225 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.226 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8953416 kB' 'MemAvailable: 10543600 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 452112 kB' 'Inactive: 1476096 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134092 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6384 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 351940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
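What the long run of `continue` entries above (and continuing below) is doing: `get_meminfo` in setup/common.sh walks a snapshot of /proc/meminfo one field at a time, splitting each line on `': '` into a name and a value, skipping every field that does not match the requested key, and echoing the value of the one that does. The HugePages_Surp lookup just above came back 0 (surp=0), and the HugePages_Rsvd lookup being traced here returns 0 as well. A minimal sketch of that pattern, simplified to read the file directly and ignore the per-node case (so not the exact setup/common.sh helper), might look like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of the lookup traced above -- simplified, not the
# exact setup/common.sh helper (which loads the snapshot into an
# array and also supports per-node meminfo files).
get_meminfo() {
    local get=$1 var val _

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

surp=$(get_meminfo HugePages_Surp)   # 0 in the run traced here
resv=$(get_meminfo HugePages_Rsvd)   # 0 in the run traced here
```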
00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.227 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
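A note on the `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` tokens that fill these lines: they are not log corruption. The comparison at setup/common.sh:32 tests the field name against the quoted lookup key, and because the right-hand side of `==` inside `[[ ]]` is a pattern, bash's xtrace prints the quoted expansion with each character backslash-escaped to mark it as a literal (non-glob) match. The same effect can be reproduced with a short experiment (variable values here are illustrative):

```bash
#!/usr/bin/env bash
# Reproduce the backslash-escaped tokens seen in the trace.
set -x
get=HugePages_Rsvd
var=MemTotal
[[ $var == "$get" ]] || echo "no match"
# The xtrace line for the test above is printed roughly as:
#   + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x
```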
00:03:47.228 05:50:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 
05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.229 nr_hugepages=512 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:47.229 resv_hugepages=0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.229 surplus_hugepages=0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.229 anon_hugepages=0 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8953164 kB' 'MemAvailable: 10543348 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451548 kB' 'Inactive: 1476096 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134092 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
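Stripped of the field-by-field tracing, what hugepages.sh has established by this point is simple accounting: the test asked for 512 hugepages, the snapshots above report HugePages_Surp=0 and HugePages_Rsvd=0 (echoed as surplus_hugepages=0 and resv_hugepages=0), and the lookup now running fetches HugePages_Total so the script can assert that every requested page is present. Reduced to the values observed in this run, the check is roughly:

```bash
#!/usr/bin/env bash
# The consistency check behind the trace, using values observed in
# this run (names mirror the hugepages.sh variables).
nr_hugepages=512   # requested allocation
surp=0             # get_meminfo HugePages_Surp
resv=0             # get_meminfo HugePages_Rsvd
total=512          # get_meminfo HugePages_Total (echoed later in the trace)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# 512 == 512 + 0 + 0 -- all requested pages are accounted for.
(( total == nr_hugepages + surp + resv ))
```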
00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.229 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
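The `mapfile -t mem` and `mem=("${mem[@]#Node +([0-9]) }")` steps that precede each of these field loops are what let the same parser serve both the system-wide and the per-node views: the whole meminfo file is slurped into an array, and any leading `Node <n> ` prefix (the format used by the per-node meminfo files under /sys/devices/system/node/) is stripped so the fields line up either way. A sketch of just that loading step, illustrative only and under the same simplifications as the earlier sketch:

```bash
#!/usr/bin/env bash
# Loading step sketched from the trace: pick the per-node meminfo file
# when a node id is given and the file exists, otherwise fall back to
# /proc/meminfo, then strip the "Node <n> " prefix so both formats
# parse identically.
shopt -s extglob                  # needed for the +([0-9]) pattern

node=${1:-}                       # empty => system-wide lookup
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem <"$mem_f"
mem=("${mem[@]#Node +([0-9]) }")  # no-op for /proc/meminfo lines

printf '%s\n' "${mem[@]:0:3}"     # show the first few normalized lines
```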
00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 
05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.230 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8953572 kB' 'MemUsed: 3288380 kB' 'SwapCached: 0 kB' 'Active: 451756 kB' 'Inactive: 1476096 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1806644 kB' 'Mapped: 48476 kB' 'AnonPages: 122852 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 62336 kB' 'Slab: 134092 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.231 05:50:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.231 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.232 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.233 node0=512 expecting 512 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.233 00:03:47.233 real 0m0.541s 00:03:47.233 user 0m0.275s 00:03:47.233 sys 0m0.283s 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.233 05:50:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.233 ************************************ 00:03:47.233 END TEST per_node_1G_alloc 00:03:47.233 ************************************ 00:03:47.233 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.233 05:50:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:47.233 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.233 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.233 05:50:03 
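The long runs of setup/common.sh@31/@32 entries above are the get_meminfo helper walking /proc/meminfo one "Key: value" pair at a time: every field that is not the requested key (HugePages_Surp in this pass) falls through to "continue", which is why each meminfo field name shows up exactly once before the final echo 0 / return 0. Below is a minimal bash sketch of that loop, reconstructed only from the commands visible in the trace; the per-node branch and the exact layout of the real setup/common.sh helper are assumptions for illustration.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above; not the actual SPDK helper.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    # Assumed per-node branch: read the node's own meminfo when a node is named.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix

    # Skip every field until the requested key matches literally, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 on the build VM in this log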
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.233 ************************************ 00:03:47.233 START TEST even_2G_alloc 00:03:47.233 ************************************ 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.233 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.806 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.806 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc 
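Just above, the even_2G_alloc test sized its hugepage budget: get_test_nr_hugepages ran with size=2097152 kB, the traced assignments set nr_hugepages=1024 and put all 1024 pages on the single node (nodes_test[0]=1024), and NRHUGE=1024 together with HUGE_EVEN_ALLOC=yes is then passed to scripts/setup.sh. Those numbers are consistent with dividing the requested size by the 2048 kB Hugepagesize shown in the meminfo snapshots that follow; the exact formula inside setup/hugepages.sh is not visible here, so the lines below are only a sketch of that arithmetic.

# Hypothetical recomputation of the traced values (size=2097152 kB -> 1024 pages).
size_kb=2097152                                                 # 2 GiB requested by the test
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
nr_hugepages=$((size_kb / hugepage_kb))                         # 2097152 / 2048 = 1024
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"                 # environment handed to scripts/setup.sh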
-- setup/hugepages.sh@92 -- # local surp 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7908928 kB' 'MemAvailable: 9499108 kB' 'Buffers: 2436 kB' 'Cached: 1804204 kB' 'SwapCached: 0 kB' 'Active: 452084 kB' 'Inactive: 1476092 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134128 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71792 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
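The meminfo snapshot printed at 00:03:47.806 is internally consistent and already shows the even allocation in place: Active(anon) plus Active(file) adds up to Active, and HugePages_Total times Hugepagesize equals the Hugetlb total, with all 1024 pages still free. Two quick checks using the printed values:

echo $((132000 + 320084))   # Active(anon) + Active(file) = 452084 kB = Active
echo $((1024 * 2048))       # HugePages_Total * Hugepagesize = 2097152 kB = Hugetlb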
setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.806 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7909180 kB' 'MemAvailable: 9499360 kB' 'Buffers: 2436 kB' 'Cached: 1804204 kB' 'SwapCached: 0 kB' 'Active: 451848 kB' 'Inactive: 
1476092 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134132 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71796 kB' 'KernelStack: 6272 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7909180 kB' 'MemAvailable: 9499364 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451676 kB' 'Inactive: 1476096 kB' 'Active(anon): 131592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134120 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71784 kB' 'KernelStack: 6288 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.809 nr_hugepages=1024 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.809 resv_hugepages=0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.809 surplus_hugepages=0 00:03:47.809 anon_hugepages=0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7909180 kB' 'MemAvailable: 9499364 kB' 'Buffers: 2436 kB' 'Cached: 1804208 kB' 'SwapCached: 0 kB' 'Active: 451524 kB' 'Inactive: 1476096 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122600 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134096 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71760 kB' 'KernelStack: 6352 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:47.809 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.810 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7909180 kB' 'MemUsed: 4332772 kB' 'SwapCached: 0 kB' 'Active: 451784 kB' 'Inactive: 1476096 kB' 'Active(anon): 131700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1806644 kB' 'Mapped: 48476 kB' 'AnonPages: 122860 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134096 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.810 node0=1024 expecting 1024 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.810 00:03:47.810 real 0m0.505s 00:03:47.810 user 0m0.263s 00:03:47.810 sys 0m0.274s 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.810 05:50:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.810 ************************************ 00:03:47.810 END TEST even_2G_alloc 00:03:47.810 ************************************ 00:03:47.810 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.810 05:50:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:47.810 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.810 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.810 05:50:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.810 ************************************ 00:03:47.810 START TEST odd_alloc 00:03:47.810 ************************************ 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
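The even_2G_alloc case ends here: node 0 reported the expected 1024 hugepages ("node0=1024 expecting 1024", confirmed by the [[ 1024 == 1024 ]] check) and the case took roughly half a second. The odd_alloc case that starts next asks get_test_nr_hugepages for 2,098,176 kB, which it turns into nr_hugepages=1025 on a single node and exports as HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes before re-running scripts/setup.sh. The sketch below reconstructs that sizing step and shows how the same pre-condition could be reproduced by hand; the ceiling rounding and the way setup.sh consumes the environment are assumptions inferred from the trace (hugepages.sh@49-@57 and @159-@160), not copied from the SPDK sources.

    # Illustrative reconstruction of the sizing traced above; rounding is an
    # assumption chosen to match the 1025 pages reported at hugepages.sh@57.
    default_hugepages=2048                                   # Hugepagesize in kB
    size=2098176                                             # requested kB (2049 MiB)
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))   # -> 1025

    # Reproducing the pre-condition outside the test (HUGEMEM is assumed to be MiB,
    # as the 2049 value above suggests); path taken from the trace:
    HUGEMEM=2049 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo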
00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.810 05:50:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.382 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.382 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7911076 kB' 'MemAvailable: 9501264 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451860 kB' 'Inactive: 1476100 kB' 'Active(anon): 131776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134244 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71908 kB' 'KernelStack: 6372 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.382 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.382 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 
05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.383 
05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.383 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
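The block between common.sh@17 and @33 above is one call to get_meminfo: the helper snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node), strips any "Node N" prefix, then walks the fields with IFS=': ' until it reaches the requested key and echoes its value; every "continue" entry in the log is one skipped field. A stand-alone sketch of that loop, reconstructed from the xtrace rather than copied from setup/common.sh, looks roughly like this:

    #!/usr/bin/env bash
    # get_meminfo <field> [<numa-node>] -- reconstruction of setup/common.sh@17-33 as traced above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each skipped key is one "continue" line in the log
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp                 # prints 0 on this worker
    get_meminfo AnonHugePages                  # prints 0 on this worker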
00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7910824 kB' 'MemAvailable: 9501012 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451484 kB' 'Inactive: 1476100 kB' 'Active(anon): 131400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134248 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.384 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
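A note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: the key being searched for reaches [[ ... ]] through a quoted expansion, so it is matched as a literal string, and bash's xtrace marks that by backslash-escaping every character when it prints the command. The exact quoting used inside setup/common.sh is not visible in the log, but any quoted right-hand side reproduces the same rendering:

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]    # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]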
00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.385 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 
05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7910824 kB' 'MemAvailable: 9501012 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451540 kB' 'Inactive: 1476100 kB' 'Active(anon): 131456 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122868 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134244 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71908 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.386 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
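For odd_alloc, verify_nr_hugepages has so far pulled AnonHugePages (hugepages.sh@97, anon=0) and HugePages_Surp (@99, surp=0) and is now reading HugePages_Rsvd (@100); with those in hand it walks the per-node counters and compares them with the expected page count, exactly as it did for even_2G_alloc ("node0=1024 expecting 1024" at @128-@130). A very rough skeleton of that pass, reusing the get_meminfo sketch shown earlier and with the operands of the final comparison assumed rather than taken from hugepages.sh, is:

    # Skeleton of the verification implied by hugepages.sh@89-@130; illustrative only.
    # nodes_test and expected are stand-ins for state populated earlier in the test.
    declare -a nodes_test=([0]=1025)           # odd_alloc expects 1025 pages on node 0
    expected=1025
    verify_nr_hugepages() {
        local anon surp resv node
        anon=$(get_meminfo AnonHugePages)      # @97  -> 0 in this run
        surp=$(get_meminfo HugePages_Surp)     # @99  -> 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)     # @100 -> being read above
        for node in "${!nodes_test[@]}"; do    # @126
            echo "node$node=${nodes_test[node]} expecting $expected"
            [[ ${nodes_test[node]} == "$expected" ]]   # @130-style check (operands assumed)
        done
    }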
00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.387 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.388 nr_hugepages=1025 00:03:48.388 resv_hugepages=0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.388 surplus_hugepages=0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.388 anon_hugepages=0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.388 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7910824 kB' 'MemAvailable: 9501012 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451576 kB' 'Inactive: 1476100 kB' 'Active(anon): 131492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134240 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71904 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459980 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.389 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.389 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.390 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7910824 kB' 'MemUsed: 4331128 kB' 'SwapCached: 0 kB' 'Active: 451532 kB' 'Inactive: 1476100 kB' 'Active(anon): 131448 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1806648 kB' 'Mapped: 48476 kB' 'AnonPages: 122868 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134240 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.391 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
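The get_meminfo calls traced above boil down to: pick /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given), drop the "Node <N> " prefix that the per-node file adds, then scan "key: value" pairs until the requested field (HugePages_Total, HugePages_Surp, ...) is found and echo its value. A minimal bash sketch of that lookup, under the assumption of a simplified structure — the function name and layout are illustrative, not SPDK's setup/common.sh itself:

# --- illustrative sketch, not part of the test log ---
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}           # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # e.g. 1025 for HugePages_Total in the dump above
            return 0
        fi
    done <"$mem_f"
    return 1
}
# Usage (hypothetical): get_meminfo_sketch HugePages_Surp 0   -> prints 0 on this runner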
00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.392 node0=1025 expecting 1025 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:48.392 00:03:48.392 real 0m0.555s 00:03:48.392 user 0m0.282s 00:03:48.392 sys 0m0.280s 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.392 05:50:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.392 ************************************ 00:03:48.392 END TEST odd_alloc 00:03:48.392 ************************************ 00:03:48.392 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.392 05:50:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.392 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.392 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.392 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.392 ************************************ 00:03:48.392 START TEST custom_alloc 00:03:48.392 ************************************ 00:03:48.392 05:50:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:48.392 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.392 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:48.392 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.392 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.393 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.968 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.968 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8960468 kB' 'MemAvailable: 10550656 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 452096 kB' 'Inactive: 1476100 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123092 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134284 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71948 kB' 'KernelStack: 6308 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
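The custom_alloc trace at the top of this section builds the per-node request: nodes_hp[0]=512 is folded into the HUGENODE argument list and summed into _nr_hugepages before scripts/setup.sh is invoked. A condensed sketch of that bookkeeping, reusing the variable names from the log but not taken from the original setup/hugepages.sh, is:

#!/usr/bin/env bash
# Condensed sketch of the bookkeeping visible in the custom_alloc trace
# above; variable names mirror the log, but this is not the original
# setup/hugepages.sh.
declare -a nodes_hp HUGENODE
nodes_hp[0]=512                 # one NUMA node, 512 hugepages requested
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # becomes 'nodes_hp[0]=512'
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "HUGENODE=${HUGENODE[*]} nr_hugepages=$_nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512 nr_hugepages=512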
00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.968 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241952 kB' 'MemFree: 8960484 kB' 'MemAvailable: 10550672 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451760 kB' 'Inactive: 1476100 kB' 'Active(anon): 131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122748 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134280 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71944 kB' 'KernelStack: 6324 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.969 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.970 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
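The long runs of "[[ <field> == ... ]]" / "continue" entries above and below are get_meminfo scanning the mapfile'd /proc/meminfo contents one "Key: value" pair at a time until it reaches the requested field (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), echoing the value and returning. A minimal stand-alone equivalent of that lookup, a sketch rather than the original setup/common.sh helper, and limited to the system-wide /proc/meminfo (the real helper can also read /sys/devices/system/node/nodeN/meminfo), is:

#!/usr/bin/env bash
# Minimal sketch of the field lookup traced above; not the original
# setup/common.sh helper, and it only reads the system-wide /proc/meminfo.
get_meminfo_field() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        [[ $key == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_field HugePages_Surp    # prints 0 in the run logged here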
00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8960484 kB' 'MemAvailable: 10550672 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451548 kB' 'Inactive: 1476100 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134268 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71932 kB' 'KernelStack: 6368 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.971 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.972 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.973 nr_hugepages=512 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:48.973 resv_hugepages=0 
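At this point the trace has finished the HugePages_Rsvd lookup: setup/common.sh mapfiles /proc/meminfo (or a per-node meminfo under sysfs), strips any "Node N " prefix, then walks the fields with `IFS=': ' read -r var val _`, emitting a `continue` record for every field until the name matches the requested key, at which point it echoes the value and returns (the backslash-escaped `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` is just how bash xtrace renders the quoted comparison string). Below is a minimal, self-contained sketch of that pattern; the function name is hypothetical and the node-prefix handling is simplified, so it is not the real setup/common.sh source.

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern visible in the trace above.
# NOT the real setup/common.sh: helper name is hypothetical and the
# "Node N " prefix stripping is simplified to a sed call.
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # field name, optional NUMA node
    local mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Walk the file field by field; skip everything until the key matches,
    # mirroring the long run of "continue" records in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # prints 0 on the VM in this run
```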
00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.973 surplus_hugepages=0 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.973 anon_hugepages=0 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8960624 kB' 'MemAvailable: 10550812 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451556 kB' 'Inactive: 1476100 kB' 'Active(anon): 131472 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122600 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134260 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6352 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985292 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.973 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 
05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.974 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 8960624 kB' 'MemUsed: 3281328 kB' 'SwapCached: 0 kB' 'Active: 451532 kB' 'Inactive: 1476100 kB' 'Active(anon): 131448 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1806648 kB' 'Mapped: 48476 kB' 'AnonPages: 122836 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134260 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.975 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.976 node0=512 expecting 512 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.976 00:03:48.976 real 0m0.502s 00:03:48.976 user 0m0.262s 00:03:48.976 sys 0m0.271s 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.976 05:50:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.976 ************************************ 00:03:48.976 END TEST custom_alloc 
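The custom_alloc test has just passed: HugePages_Total came back as 512 with 0 reserved and 0 surplus, the per-node pass credited the whole pool to node0, and the script echoed "node0=512 expecting 512" before printing its timing and the END TEST banner. The sketch below is a hedged reconstruction of that accounting, reusing the hypothetical helper from the earlier sketch; the real checks live in setup/hugepages.sh and handle multiple nodes, which is simplified here.

```bash
# Sketch of the custom_alloc accounting seen above (hypothetical helper
# names; the real logic is in setup/hugepages.sh and is not reproduced).
verify_custom_alloc_sketch() {
    local expected=${1:-512}
    local total surp resv node0

    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)

    # The requested pool must match what the kernel reports once surplus
    # and reserved pages are folded in -- the "(( 512 == nr_hugepages +
    # surp + resv ))" checks in the log.
    (( expected == total + surp + resv )) || return 1

    # Single-node VM: node0 is expected to hold the whole pool.
    node0=$(get_meminfo_sketch HugePages_Total 0)
    echo "node0=$node0 expecting $expected"
    [[ $node0 == "$expected" ]]
}
```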
00:03:48.976 ************************************ 00:03:48.976 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.976 05:50:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:48.976 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.976 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.976 05:50:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.976 ************************************ 00:03:48.976 START TEST no_shrink_alloc 00:03:48.976 ************************************ 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.976 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.977 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:48.977 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:48.977 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.977 05:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.500 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.500 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.500 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:49.500 
05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.500 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.500 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.500 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.500 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7913372 kB' 'MemAvailable: 9503560 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 452228 kB' 'Inactive: 1476100 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123260 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134260 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71924 kB' 'KernelStack: 6340 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
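verify_nr_hugepages for the no_shrink_alloc test opens the same way as before: hugepages.sh@96 tests the transparent-hugepage state (`[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, i.e. THP is not globally set to "[never]" on this VM) and then reads AnonHugePages as the anonymous-hugepage baseline before re-checking the reserved pool. A small sketch of that gate follows; the sysfs path is the standard transparent_hugepage knob, and `get_meminfo_sketch` is the hypothetical helper from the earlier sketch, not the real script.

```bash
# Sketch of the THP gate at hugepages.sh@96 above.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
# On this VM the knob reads "always [madvise] never", so THP is not
# disabled and AnonHugePages is worth recording as a baseline.
if [[ $thp_enabled != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in the dump above
else
    anon=0
fi
echo "anon_hugepages=$anon"
```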
00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 
05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 
05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.501 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7913372 kB' 'MemAvailable: 9503560 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451840 kB' 'Inactive: 1476100 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122828 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134256 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71920 kB' 'KernelStack: 6320 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.502 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.502 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7913372 kB' 'MemAvailable: 9503560 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451588 kB' 'Inactive: 1476100 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122612 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134256 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71920 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.503 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.504 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.505 nr_hugepages=1024 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.505 resv_hugepages=0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.505 surplus_hugepages=0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.505 anon_hugepages=0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
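Note on the trace above: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" are bash xtrace for a field-by-field scan. setup/common.sh's get_meminfo snapshots the whole meminfo file into an array, strips any "Node N " prefix, then reads the entries back with IFS=': ' until the requested key matches and echoes its value (0 here, which hugepages.sh stores as resv). A minimal standalone sketch of the same idea follows; the helper name is made up and is not part of the SPDK scripts.

# sketch only: a reader in the spirit of setup/common.sh's get_meminfo (hypothetical helper name)
shopt -s extglob                                   # needed for the +([0-9]) prefix strip below
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # per-node files live under /sys/devices/system/node/nodeN/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")               # per-node lines carry a "Node 0 " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue           # the skip-until-match loop seen in the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}
# e.g. get_meminfo_value HugePages_Rsvd       -> 0 on this VM
#      get_meminfo_value HugePages_Surp 0     -> per-node value read from node0's meminfo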
00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7913372 kB' 'MemAvailable: 9503560 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451548 kB' 'Inactive: 1476100 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62336 kB' 'Slab: 134256 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71920 kB' 'KernelStack: 6320 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
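Note: the snapshot printed in the entry above is the input for the consistency check at hugepages.sh@107/@110. With HugePages_Rsvd and HugePages_Surp both 0, all 1024 configured pages have to show up in HugePages_Total, and the sizes tie out as well: 1024 pages of Hugepagesize 2048 kB is 2097152 kB, exactly the Hugetlb figure in the snapshot. A small sketch that redoes the arithmetic with the values from this log (they will differ on other hosts):

# sketch: the accounting the test asserts, using the numbers from the snapshot above
nr_hugepages=1024; resv=0; surp=0        # HugePages_Total / HugePages_Rsvd / HugePages_Surp
(( 1024 == nr_hugepages + surp + resv )) && echo "all requested pages are accounted for"
echo "expected Hugetlb: $(( nr_hugepages * 2048 )) kB"   # 1024 * 2048 kB = 2097152 kB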
00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.505 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7913372 kB' 'MemUsed: 4328580 kB' 'SwapCached: 0 kB' 'Active: 451580 kB' 'Inactive: 1476100 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1806648 kB' 'Mapped: 48476 kB' 'AnonPages: 122896 kB' 
'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62336 kB' 'Slab: 134256 kB' 'SReclaimable: 62336 kB' 'SUnreclaim: 71920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.506 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 
05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.507 node0=1024 expecting 1024 00:03:49.507 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.508 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.767 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.767 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.767 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:49.767 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:49.767 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.767 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.767 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.767 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7916960 kB' 'MemAvailable: 9507136 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 452340 kB' 'Inactive: 1476100 kB' 'Active(anon): 132256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123448 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 62308 kB' 'Slab: 134224 kB' 'SReclaimable: 62308 kB' 'SUnreclaim: 71916 kB' 'KernelStack: 6436 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
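Note on the preceding entries: before this second verify pass, get_nodes walked /sys/devices/system/node/node+([0-9]) (a single node here, no_nodes=1), the per-node HugePages_Surp came from /sys/devices/system/node/node0/meminfo, and the script echoed "node0=1024 expecting 1024" before the final [[ 1024 == 1024 ]] assert. The test then re-ran scripts/setup.sh with CLEAR_HUGE=no NRHUGE=512; because the existing pages were not cleared, setup.sh kept them and logged "Requested 512 hugepages but 1024 already allocated on node0", after which verify_nr_hugepages starts over above. A sketch of the per-node readout, assuming the standard sysfs layout:

# sketch: per-node hugepage counts straight from sysfs (node numbers depend on the host)
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
    echo "node${n}=${total} (surplus ${surp})"
done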
00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.031 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
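Note: the gate for this AnonHugePages lookup is the earlier test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]. The string being tested is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled, and the pattern only fails when the active selection is [never]. THP is set to [madvise] on this VM, so the scan runs; AnonHugePages is 0 kB in the snapshot above, so the lookup will come back as anon=0. A hedged equivalent of that gate:

# sketch: same gate, reading the standard THP knob directly
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)     # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    awk '/^AnonHugePages:/ {print "anon hugepages:", $2, $3}' /proc/meminfo
else
    echo "THP disabled; anonymous hugepages not counted"
fi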
00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.032 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7916960 kB' 'MemAvailable: 9507136 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451952 kB' 'Inactive: 1476100 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48648 kB' 'Shmem: 10464 kB' 'KReclaimable: 62308 kB' 'Slab: 134216 kB' 'SReclaimable: 62308 kB' 'SUnreclaim: 71908 kB' 'KernelStack: 6356 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
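Just before each of these scans the trace also shows how the helper picks its input file: node is empty, so the test at setup/common.sh@23 probes the degenerate path /sys/devices/system/node/node/meminfo, fails, and the helper stays with /proc/meminfo; only a real node id would switch it to the per-NUMA-node file. A hedged sketch of that selection, with the exact condition at @25 inferred from the [[ -n '' ]] entry:

# Sketch: choose between system-wide and per-node meminfo, as in the trace.
node=${1-}                    # empty in this run -> system-wide statistics
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' "$mem_f"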
00:03:50.033 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[per-key scan continues: Mlocked through CmaTotal are compared against HugePages_Surp at setup/common.sh@32, none match, and each falls through to continue]
00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7916708 kB' 'MemAvailable: 9506884 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451588 kB' 'Inactive: 1476100 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122576 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62308 kB' 'Slab: 134224 kB' 'SReclaimable: 62308 kB' 'SUnreclaim: 71916 kB' 'KernelStack: 6320 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.034 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.035 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.035 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[per-key scan continues: SwapCached through VmallocChunk are compared against HugePages_Rsvd at setup/common.sh@32, none match, and each falls through to continue]
00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
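The mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") entries that precede each scan show one more detail: the whole file is slurped into an array first, and a leading "Node <id> " prefix is stripped so per-node meminfo lines parse exactly like /proc/meminfo lines (the +([0-9]) pattern needs extglob). A small sketch of that normalization with made-up sample lines:

#!/usr/bin/env bash
shopt -s extglob
# Sketch: strip the "Node <id> " prefix that per-node meminfo files carry,
# so the same key/value parser works for both sources. Sample data only.
mem=("Node 0 HugePages_Total:  1024" "Node 0 HugePages_Free:   1024")
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"     # -> HugePages_Total:  1024 / HugePages_Free:   1024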
00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.036 nr_hugepages=1024 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.036 resv_hugepages=0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.036 surplus_hugepages=0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.036 anon_hugepages=0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7916204 kB' 'MemAvailable: 9506380 kB' 'Buffers: 2436 kB' 'Cached: 1804212 kB' 'SwapCached: 0 kB' 'Active: 451580 kB' 'Inactive: 1476100 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122864 kB' 'Mapped: 48476 kB' 'Shmem: 10464 kB' 'KReclaimable: 62308 kB' 'Slab: 134220 kB' 'SReclaimable: 62308 kB' 'SUnreclaim: 71912 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461004 kB' 'Committed_AS: 352308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.036 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
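The long run of key-by-key comparisons above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time with IFS=': ' and discarding (continue) every field except the one it was asked for. A minimal standalone sketch of the same idea follows; the function name is illustrative and is not the SPDK helper itself.

  # Sketch only: print the numeric value of one /proc/meminfo field,
  # mirroring the scan-and-continue pattern visible in the trace.
  get_meminfo_field() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # every non-matching key is skipped, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  # e.g. get_meminfo_field HugePages_Total prints 1024 on this test VM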
00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.037 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
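A few records below, once this HugePages_Total scan returns 1024, the test re-reads the same counters from the per-node file /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh strips with the ${mem[@]#Node +([0-9]) } expansion before running the identical key scan, and then evaluates (( 1024 == nr_hugepages + surp + resv )). A hedged sketch of that per-node read and of the bookkeeping being asserted is below; the helper name is made up, /proc/sys/vm/nr_hugepages only covers the default hugepage size, and the final expression holds here only because surplus and reserved are both 0, as the trace shows.

  # Sketch: read one field from a NUMA node's meminfo, stripping the "Node N " prefix.
  get_node_meminfo_field() {
      local node=$1 want=$2 line var val _
      while read -r line; do
          line=${line#"Node $node "}          # "Node 0 HugePages_Total:  1024" -> "HugePages_Total:  1024"
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$want" ]] || continue
          echo "$val"
          return 0
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  # The expression the trace keeps re-evaluating, written out once:
  total=$(get_node_meminfo_field 0 HugePages_Total)
  surp=$(get_node_meminfo_field 0 HugePages_Surp)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # Rsvd is only exposed globally
  nr=$(cat /proc/sys/vm/nr_hugepages)                         # persistent pages of the default size
  (( total == nr + surp + rsvd )) && echo 'hugepage accounting consistent'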
00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241952 kB' 'MemFree: 7916204 kB' 'MemUsed: 4325748 kB' 'SwapCached: 0 kB' 'Active: 
451824 kB' 'Inactive: 1476100 kB' 'Active(anon): 131740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1476100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1806648 kB' 'Mapped: 48476 kB' 'AnonPages: 122904 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62308 kB' 'Slab: 134224 kB' 'SReclaimable: 62308 kB' 'SUnreclaim: 71916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 
05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.038 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.039 node0=1024 expecting 1024 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.039 00:03:50.039 real 0m1.011s 00:03:50.039 user 0m0.509s 00:03:50.039 sys 0m0.570s 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.039 05:50:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.039 ************************************ 00:03:50.039 END TEST no_shrink_alloc 00:03:50.039 ************************************ 00:03:50.039 05:50:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:50.040 
05:50:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:50.040 05:50:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:50.040 00:03:50.040 real 0m4.540s 00:03:50.040 user 0m2.227s 00:03:50.040 sys 0m2.351s 00:03:50.040 05:50:05 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.040 ************************************ 00:03:50.040 END TEST hugepages 00:03:50.040 05:50:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.040 ************************************ 00:03:50.040 05:50:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:50.040 05:50:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:50.040 05:50:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.040 05:50:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.040 05:50:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.040 ************************************ 00:03:50.040 START TEST driver 00:03:50.040 ************************************ 00:03:50.040 05:50:05 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:50.298 * Looking for test storage... 00:03:50.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.298 05:50:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:50.298 05:50:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.298 05:50:06 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.865 05:50:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:50.865 05:50:06 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.865 05:50:06 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.865 05:50:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.865 ************************************ 00:03:50.865 START TEST guess_driver 00:03:50.865 ************************************ 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
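The guess_driver records around this point show driver.sh choosing between vfio and uio_pci_generic: vfio is picked only when /sys/kernel/iommu_groups is populated or unsafe no-IOMMU mode is enabled, and uio_pci_generic is accepted only if modprobe --show-depends can resolve it to real .ko modules, as it does on this VM. A simplified sketch of that decision, assuming the same sysfs paths are present (the function name is illustrative):

  # Sketch: pick a userspace PCI driver the way the trace does.
  pick_pci_driver() {
      shopt -s nullglob                      # empty iommu_groups dir -> empty array (shell-wide in this sketch)
      local groups=(/sys/kernel/iommu_groups/*) unsafe=''
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
          echo uio_pci_generic               # module resolvable, so the fallback is usable
      else
          echo 'No valid driver found' >&2
          return 1
      fi
  }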
00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:50.865 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:50.865 Looking for driver=uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.865 05:50:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.447 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.711 05:50:07 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.278 00:03:52.278 real 0m1.392s 00:03:52.278 user 0m0.523s 00:03:52.278 sys 0m0.884s 00:03:52.278 05:50:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:52.278 05:50:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:52.278 ************************************ 00:03:52.278 END TEST guess_driver 00:03:52.278 ************************************ 00:03:52.278 05:50:08 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:52.278 00:03:52.278 real 0m2.077s 00:03:52.278 user 0m0.755s 00:03:52.278 sys 0m1.389s 00:03:52.278 05:50:08 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.278 05:50:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:52.278 ************************************ 00:03:52.278 END TEST driver 00:03:52.278 ************************************ 00:03:52.278 05:50:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.278 05:50:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.278 05:50:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.278 05:50:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.278 05:50:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.278 ************************************ 00:03:52.278 START TEST devices 00:03:52.278 ************************************ 00:03:52.278 05:50:08 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.278 * Looking for test storage... 00:03:52.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:52.278 05:50:08 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:52.278 05:50:08 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:52.278 05:50:08 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.278 05:50:08 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
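From this point the devices.sh trace filters candidate block devices: zoned namespaces are excluded via /sys/block/<dev>/queue/zoned, devices that already carry a partition table are treated as in use (spdk-gpt.py and blkid report "No valid GPT data, bailing" for the clean disks here), and only disks of at least min_disk_size=3221225472 bytes (3 GiB) are kept and mapped to their PCI address. A compressed sketch of that filtering using only sysfs and blkid, with spdk-gpt.py left out and plain PCIe-attached controllers assumed:

  # Sketch: collect non-zoned, unpartitioned, large-enough NVMe namespaces,
  # mirroring the get_zoned_devs / block_in_use / min_disk_size flow in the trace.
  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh
  declare -a blocks=()
  declare -A blocks_to_pci=()

  for block in /sys/block/nvme*; do
      [[ -e $block ]] || continue              # no NVMe devices at all
      dev=${block##*/}
      [[ $dev == *c*n* ]] && continue          # skip hidden multipath paths (the !(*c*) glob in the trace)
      # Zoned namespaces are skipped (the is_block_zoned check above).
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # A detected partition table means the disk is in use (stand-in for spdk-gpt.py).
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
      # sysfs reports the size in 512-byte sectors.
      size=$(( $(<"$block/size") * 512 ))
      (( size >= min_disk_size )) || continue
      blocks+=("$dev")
      pci=$(cat "$block/device/address" 2>/dev/null)   # PCIe controllers expose their bus:device.function here
      blocks_to_pci[$dev]=$pci
  done

  for dev in "${blocks[@]}"; do
      echo "usable: $dev -> ${blocks_to_pci[$dev]}"
  done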
00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.214 05:50:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:53.214 05:50:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.214 No valid GPT data, bailing 00:03:53.214 05:50:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.214 05:50:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.214 05:50:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.214 05:50:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:53.214 05:50:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:53.215 
05:50:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:53.215 05:50:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:53.215 No valid GPT data, bailing 00:03:53.215 05:50:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:53.215 05:50:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.215 05:50:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.215 05:50:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:53.215 05:50:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:53.215 05:50:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:53.215 05:50:09 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:53.215 No valid GPT data, bailing 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:53.215 05:50:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:53.215 05:50:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:53.215 05:50:09 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:53.215 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:53.215 05:50:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:53.215 05:50:09 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:53.215 No valid GPT data, bailing 00:03:53.473 05:50:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.473 05:50:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.473 05:50:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:53.473 05:50:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:53.473 05:50:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:53.473 05:50:09 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.473 05:50:09 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.473 05:50:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.473 05:50:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.473 05:50:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.473 ************************************ 00:03:53.473 START TEST nvme_mount 00:03:53.473 ************************************ 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.473 05:50:09 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.409 Creating new GPT entries in memory. 00:03:54.409 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.409 other utilities. 00:03:54.409 05:50:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.409 05:50:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.409 05:50:10 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.409 05:50:10 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.409 05:50:10 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:55.344 Creating new GPT entries in memory. 00:03:55.344 The operation has completed successfully. 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57706 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.344 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.603 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.862 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.862 05:50:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.121 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.121 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.121 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.121 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.121 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:56.121 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:56.121 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.381 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.644 05:50:12 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.644 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.929 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.217 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.217 05:50:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.217 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.217 00:03:57.217 real 0m3.949s 00:03:57.217 user 0m0.680s 00:03:57.217 sys 0m0.994s 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.217 ************************************ 00:03:57.217 05:50:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:57.217 END TEST nvme_mount 00:03:57.217 ************************************ 00:03:57.489 05:50:13 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:57.489 05:50:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:57.489 05:50:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.489 05:50:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.489 05:50:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:57.489 ************************************ 00:03:57.489 START TEST dm_mount 00:03:57.489 ************************************ 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
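The dm_mount prologue traced just above mirrors nvme_mount: setup/common.sh first builds the list of partition names (two of them this time) before any disk command runs. A minimal stand-alone sketch of that loop, with names taken from the trace (illustrative, not the SPDK helper itself):

  disk=nvme0n1
  part_no=2                        # dm_mount carves two partitions
  parts=()
  for (( part = 1; part <= part_no; part++ )); do
      parts+=("${disk}p${part}")   # yields: nvme0n1p1 nvme0n1p2
  done
  echo "${parts[@]}"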
00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.489 05:50:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:58.427 Creating new GPT entries in memory. 00:03:58.427 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.427 other utilities. 00:03:58.427 05:50:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.427 05:50:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.427 05:50:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.427 05:50:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.427 05:50:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:59.363 Creating new GPT entries in memory. 00:03:59.363 The operation has completed successfully. 00:03:59.363 05:50:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.363 05:50:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.363 05:50:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.363 05:50:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.363 05:50:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:00.741 The operation has completed successfully. 
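The entries above show the disk being wiped and repartitioned with sgdisk, one flock-guarded call per partition; size=1073741824 is divided by 4096 to get a 262144-sector span, which is why partition 1 ends at sector 264191 (2048 + 262144 - 1) and partition 2 at 526335. Reduced to plain commands (destructive; assumes /dev/nvme0n1 is a scratch disk as in this VM, and note the harness actually waits on scripts/sync_dev_uevents.sh rather than partprobe):

  sgdisk /dev/nvme0n1 --zap-all                                    # destroy old GPT/MBR structures
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191       # partition 1, 262144 sectors
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335     # partition 2, 262144 sectors
  partprobe /dev/nvme0n1   # illustrative substitute for sync_dev_uevents.sh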
00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58135 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.741 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.742 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.001 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:01.002 05:50:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.002 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.002 05:50:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.287 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:01.546 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:01.546 00:04:01.546 real 0m4.198s 00:04:01.546 user 0m0.453s 00:04:01.546 sys 0m0.697s 00:04:01.546 ************************************ 00:04:01.546 END TEST dm_mount 00:04:01.546 ************************************ 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.546 05:50:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.546 05:50:17 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:01.546 05:50:17 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.546 05:50:17 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.805 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.805 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.805 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.805 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.805 05:50:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:01.805 ************************************ 00:04:01.805 END TEST devices 00:04:01.805 ************************************ 00:04:01.805 00:04:01.805 real 0m9.647s 00:04:01.805 user 0m1.750s 00:04:01.805 sys 0m2.266s 00:04:01.805 05:50:17 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.805 05:50:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:02.063 05:50:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.063 00:04:02.063 real 0m21.122s 00:04:02.063 user 0m6.869s 00:04:02.063 sys 0m8.656s 00:04:02.063 05:50:17 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.063 ************************************ 00:04:02.063 END TEST setup.sh 00:04:02.063 05:50:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.063 ************************************ 00:04:02.063 05:50:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.063 05:50:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:02.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.629 Hugepages 00:04:02.629 node hugesize free / total 00:04:02.629 node0 1048576kB 0 / 0 00:04:02.629 node0 2048kB 2048 / 2048 00:04:02.629 00:04:02.629 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.629 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:02.895 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:02.895 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:02.895 05:50:18 -- spdk/autotest.sh@130 -- # uname -s 00:04:02.895 05:50:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:02.895 05:50:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:02.895 05:50:18 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.461 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.461 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.719 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.719 05:50:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:04.655 05:50:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:04.655 05:50:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:04.655 05:50:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.655 05:50:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:04.655 05:50:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:04.655 05:50:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:04.655 05:50:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.655 05:50:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.655 05:50:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:04.655 05:50:20 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:04.655 05:50:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.655 05:50:20 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.224 Waiting for block devices as requested 00:04:05.224 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.224 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.224 05:50:21 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:05.224 05:50:21 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:05.224 05:50:21 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:05.224 05:50:21 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:05.224 05:50:21 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:05.224 05:50:21 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:05.224 05:50:21 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:05.224 05:50:21 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:05.224 05:50:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:05.224 05:50:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:05.224 05:50:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:05.224 05:50:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:05.224 05:50:21 -- common/autotest_common.sh@1557 -- # continue 00:04:05.224 
05:50:21 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:05.224 05:50:21 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:05.224 05:50:21 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.224 05:50:21 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:05.225 05:50:21 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.225 05:50:21 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:05.225 05:50:21 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:05.225 05:50:21 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:05.484 05:50:21 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:05.484 05:50:21 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:05.484 05:50:21 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:05.484 05:50:21 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:05.484 05:50:21 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:05.484 05:50:21 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:05.484 05:50:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:05.484 05:50:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:05.484 05:50:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:05.484 05:50:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:05.484 05:50:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:05.484 05:50:21 -- common/autotest_common.sh@1557 -- # continue 00:04:05.484 05:50:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:05.484 05:50:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.484 05:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.484 05:50:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:05.484 05:50:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.484 05:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:05.484 05:50:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.051 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.311 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.311 05:50:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:06.311 05:50:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:06.311 05:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.311 05:50:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:06.311 05:50:22 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:06.311 05:50:22 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.311 05:50:22 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:06.311 05:50:22 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:06.311 05:50:22 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:06.311 05:50:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:06.311 05:50:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:06.311 05:50:22 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.311 05:50:22 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.311 05:50:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:06.311 05:50:22 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:06.311 05:50:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:06.311 05:50:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:06.311 05:50:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:06.311 05:50:22 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:06.311 05:50:22 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.311 05:50:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:06.311 05:50:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:06.311 05:50:22 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:06.311 05:50:22 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.311 05:50:22 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:06.311 05:50:22 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:06.311 05:50:22 -- common/autotest_common.sh@1593 -- # return 0 00:04:06.311 05:50:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:06.311 05:50:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:06.311 05:50:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:06.311 05:50:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:06.311 05:50:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:06.311 05:50:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.311 05:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.311 05:50:22 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:06.311 05:50:22 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.311 05:50:22 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.311 05:50:22 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.311 05:50:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.311 05:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.311 05:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.311 ************************************ 00:04:06.311 START TEST env 00:04:06.311 ************************************ 00:04:06.311 05:50:22 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.570 * Looking for test storage... 
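The opal_revert_cleanup step above enumerates NVMe controllers through scripts/gen_nvme.sh and compares each controller's PCI device ID in sysfs against 0x0a54 (the device ID the opal revert path targets); both emulated controllers here report 0x0010, so nothing is reverted. A compact sketch of that enumeration, using the same paths as the trace (assumes a built SPDK tree under $rootdir):

  rootdir=/home/vagrant/spdk_repo/spdk   # path taken from the trace
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && echo "$bdf"   # print only opal-capable controllers
  done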
00:04:06.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.570 05:50:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.570 05:50:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.570 05:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.570 05:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.570 ************************************ 00:04:06.570 START TEST env_memory 00:04:06.570 ************************************ 00:04:06.570 05:50:22 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.570 00:04:06.570 00:04:06.570 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.570 http://cunit.sourceforge.net/ 00:04:06.570 00:04:06.570 00:04:06.570 Suite: memory 00:04:06.570 Test: alloc and free memory map ...[2024-07-11 05:50:22.353204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.570 passed 00:04:06.570 Test: mem map translation ...[2024-07-11 05:50:22.414282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.570 [2024-07-11 05:50:22.414361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.570 [2024-07-11 05:50:22.414458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.570 [2024-07-11 05:50:22.414490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.830 passed 00:04:06.830 Test: mem map registration ...[2024-07-11 05:50:22.512870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:06.830 [2024-07-11 05:50:22.512938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:06.830 passed 00:04:06.830 Test: mem map adjacent registrations ...passed 00:04:06.830 00:04:06.830 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.830 suites 1 1 n/a 0 0 00:04:06.830 tests 4 4 4 0 0 00:04:06.830 asserts 152 152 152 0 n/a 00:04:06.830 00:04:06.830 Elapsed time = 0.342 seconds 00:04:06.830 00:04:06.830 real 0m0.382s 00:04:06.830 user 0m0.337s 00:04:06.830 sys 0m0.037s 00:04:06.830 05:50:22 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.830 05:50:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.830 ************************************ 00:04:06.830 END TEST env_memory 00:04:06.830 ************************************ 00:04:06.830 05:50:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.830 05:50:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:06.830 05:50:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.830 05:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.830 05:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.830 ************************************ 00:04:06.830 START TEST env_vtophys 
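env_memory above exercises spdk_mem_map_alloc/spdk_mem_map_set_translation/spdk_mem_register and finishes with 4/4 tests passed in roughly 0.34 s. To reproduce just this piece outside the harness, the unit-test binary can be invoked directly (path copied from the trace; assumes the tree is already built, since run_test only adds timing and the START/END banners):

  cd /home/vagrant/spdk_repo/spdk
  ./test/env/memory/memory_ut    # prints the CUnit summary seen above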
00:04:06.830 ************************************ 00:04:06.830 05:50:22 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.090 EAL: lib.eal log level changed from notice to debug 00:04:07.090 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.090 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.090 EAL: Maximum logical cores by configuration: 128 00:04:07.090 EAL: Detected CPU lcores: 10 00:04:07.090 EAL: Detected NUMA nodes: 1 00:04:07.090 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.090 EAL: Detected shared linkage of DPDK 00:04:07.090 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.090 EAL: Selected IOVA mode 'PA' 00:04:07.090 EAL: Probing VFIO support... 00:04:07.090 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.090 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.090 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.090 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.090 EAL: Setting up physically contiguous memory... 00:04:07.090 EAL: Setting maximum number of open files to 524288 00:04:07.090 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.090 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.090 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.090 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.090 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.090 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.090 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.090 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.090 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.090 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.090 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.090 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.090 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.090 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.090 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.090 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.090 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.090 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.090 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.090 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.090 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.090 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.090 EAL: Hugepages will be freed exactly as allocated. 
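The vtophys run above initializes EAL without VFIO (the vfio and vfio_pci modules are absent in this VM), so it skips VFIO support and selects IOVA mode 'PA', matching the uio_pci_generic bindings done earlier. A quick host-side check that predicts which of those messages EAL will print (illustrative, not part of the test):

  for m in vfio vfio_pci; do
      if [[ -e /sys/module/$m ]]; then
          echo "$m loaded"
      else
          echo "$m missing: expect \"Module /sys/module/$m not found\" and IOVA mode 'PA', as in this log"
      fi
  done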
00:04:07.090 EAL: No shared files mode enabled, IPC is disabled 00:04:07.090 EAL: No shared files mode enabled, IPC is disabled 00:04:07.090 EAL: TSC frequency is ~2200000 KHz 00:04:07.090 EAL: Main lcore 0 is ready (tid=7f900cc0fa40;cpuset=[0]) 00:04:07.090 EAL: Trying to obtain current memory policy. 00:04:07.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.090 EAL: Restoring previous memory policy: 0 00:04:07.090 EAL: request: mp_malloc_sync 00:04:07.090 EAL: No shared files mode enabled, IPC is disabled 00:04:07.090 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.090 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.090 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.090 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.090 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:07.090 00:04:07.090 00:04:07.090 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.090 http://cunit.sourceforge.net/ 00:04:07.090 00:04:07.090 00:04:07.090 Suite: components_suite 00:04:07.659 Test: vtophys_malloc_test ...passed 00:04:07.659 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.659 EAL: Restoring previous memory policy: 4 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.659 EAL: Trying to obtain current memory policy. 00:04:07.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.659 EAL: Restoring previous memory policy: 4 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.659 EAL: Trying to obtain current memory policy. 00:04:07.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.659 EAL: Restoring previous memory policy: 4 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.659 EAL: Trying to obtain current memory policy. 
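Each "Heap on socket 0 was expanded by N MB" line above is EAL claiming more 2 MB hugepages for the growing allocation, and each "shrunk" line hands them back (the earlier "Hugepages will be freed exactly as allocated" message is this dynamic mode). The pool backing those expansions is the one shown in the status output earlier, 2048 pages of 2048 kB on node0; it can be watched from another shell while the test runs (illustrative):

  watch -n1 'grep -E "HugePages_(Total|Free|Rsvd)|Hugepagesize" /proc/meminfo'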
00:04:07.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.659 EAL: Restoring previous memory policy: 4 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.659 EAL: request: mp_malloc_sync 00:04:07.659 EAL: No shared files mode enabled, IPC is disabled 00:04:07.659 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.659 EAL: Trying to obtain current memory policy. 00:04:07.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.659 EAL: Restoring previous memory policy: 4 00:04:07.659 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.660 EAL: request: mp_malloc_sync 00:04:07.660 EAL: No shared files mode enabled, IPC is disabled 00:04:07.660 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.660 EAL: request: mp_malloc_sync 00:04:07.660 EAL: No shared files mode enabled, IPC is disabled 00:04:07.660 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.660 EAL: Trying to obtain current memory policy. 00:04:07.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.660 EAL: Restoring previous memory policy: 4 00:04:07.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.660 EAL: request: mp_malloc_sync 00:04:07.660 EAL: No shared files mode enabled, IPC is disabled 00:04:07.660 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.660 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.660 EAL: request: mp_malloc_sync 00:04:07.660 EAL: No shared files mode enabled, IPC is disabled 00:04:07.660 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.918 EAL: Trying to obtain current memory policy. 00:04:07.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.918 EAL: Restoring previous memory policy: 4 00:04:07.918 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.918 EAL: request: mp_malloc_sync 00:04:07.918 EAL: No shared files mode enabled, IPC is disabled 00:04:07.918 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.918 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.175 EAL: request: mp_malloc_sync 00:04:08.175 EAL: No shared files mode enabled, IPC is disabled 00:04:08.175 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.175 EAL: Trying to obtain current memory policy. 00:04:08.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.175 EAL: Restoring previous memory policy: 4 00:04:08.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.175 EAL: request: mp_malloc_sync 00:04:08.175 EAL: No shared files mode enabled, IPC is disabled 00:04:08.175 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.741 EAL: request: mp_malloc_sync 00:04:08.741 EAL: No shared files mode enabled, IPC is disabled 00:04:08.741 EAL: Heap on socket 0 was shrunk by 258MB 00:04:09.000 EAL: Trying to obtain current memory policy. 
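The expansion sizes in this stretch, 18, 34, 66, 130 and 258 MB, continue the 4, 6, 10 MB steps above and run on to 514 and 1026 MB below, i.e. 2^k + 2 MB for k = 1..10 (consistent with a buffer that doubles each round on top of the 2 MB already resident). The sequence can be reproduced with pure arithmetic (not part of the test):

  for k in $(seq 1 10); do printf '%sMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB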
00:04:09.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.000 EAL: Restoring previous memory policy: 4 00:04:09.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.000 EAL: request: mp_malloc_sync 00:04:09.000 EAL: No shared files mode enabled, IPC is disabled 00:04:09.000 EAL: Heap on socket 0 was expanded by 514MB 00:04:09.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.827 EAL: request: mp_malloc_sync 00:04:09.827 EAL: No shared files mode enabled, IPC is disabled 00:04:09.827 EAL: Heap on socket 0 was shrunk by 514MB 00:04:10.394 EAL: Trying to obtain current memory policy. 00:04:10.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.394 EAL: Restoring previous memory policy: 4 00:04:10.394 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.394 EAL: request: mp_malloc_sync 00:04:10.394 EAL: No shared files mode enabled, IPC is disabled 00:04:10.394 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.770 EAL: request: mp_malloc_sync 00:04:11.770 EAL: No shared files mode enabled, IPC is disabled 00:04:11.770 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.147 passed 00:04:13.147 00:04:13.147 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.147 suites 1 1 n/a 0 0 00:04:13.147 tests 2 2 2 0 0 00:04:13.147 asserts 5432 5432 5432 0 n/a 00:04:13.147 00:04:13.147 Elapsed time = 5.820 seconds 00:04:13.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.147 EAL: request: mp_malloc_sync 00:04:13.147 EAL: No shared files mode enabled, IPC is disabled 00:04:13.147 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.147 EAL: No shared files mode enabled, IPC is disabled 00:04:13.147 EAL: No shared files mode enabled, IPC is disabled 00:04:13.147 EAL: No shared files mode enabled, IPC is disabled 00:04:13.147 ************************************ 00:04:13.147 END TEST env_vtophys 00:04:13.147 ************************************ 00:04:13.147 00:04:13.147 real 0m6.119s 00:04:13.147 user 0m5.289s 00:04:13.147 sys 0m0.677s 00:04:13.147 05:50:28 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.147 05:50:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.147 05:50:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.147 05:50:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.147 05:50:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.147 05:50:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.147 05:50:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.147 ************************************ 00:04:13.147 START TEST env_pci 00:04:13.147 ************************************ 00:04:13.147 05:50:28 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.147 00:04:13.147 00:04:13.147 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.147 http://cunit.sourceforge.net/ 00:04:13.147 00:04:13.147 00:04:13.147 Suite: pci 00:04:13.147 Test: pci_hook ...[2024-07-11 05:50:28.924983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59378 has claimed it 00:04:13.147 passed 00:04:13.147 00:04:13.147 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.147 suites 1 1 n/a 0 0 00:04:13.147 tests 1 1 1 0 0 00:04:13.147 asserts 25 25 25 0 n/a 00:04:13.147 
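The env_pci test starting above deliberately probes a bogus address in the out-of-range domain 0x10000 (10000:00:01.0), so the "Cannot create lock", "Cannot find device" and "Failed to attach" lines are the test's own negative paths, not a regression; the suite still reports 1 test and 25 asserts passed. A quick sanity check that such an address really is absent on the host (illustrative):

  bdf=10000:00:01.0
  if [[ -e /sys/bus/pci/devices/$bdf ]]; then
      echo "$bdf present"
  else
      echo "$bdf absent (what the negative test expects)"
  fi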
00:04:13.147 Elapsed time = 0.006 seconds 00:04:13.147 EAL: Cannot find device (10000:00:01.0) 00:04:13.147 EAL: Failed to attach device on primary process 00:04:13.147 00:04:13.147 real 0m0.078s 00:04:13.147 user 0m0.043s 00:04:13.147 sys 0m0.034s 00:04:13.147 05:50:28 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.147 ************************************ 00:04:13.147 END TEST env_pci 00:04:13.147 ************************************ 00:04:13.147 05:50:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.147 05:50:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.147 05:50:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.147 05:50:29 env -- env/env.sh@15 -- # uname 00:04:13.147 05:50:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.147 05:50:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.147 05:50:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.147 05:50:29 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:13.147 05:50:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.147 05:50:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.147 ************************************ 00:04:13.147 START TEST env_dpdk_post_init 00:04:13.147 ************************************ 00:04:13.147 05:50:29 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.406 EAL: Detected CPU lcores: 10 00:04:13.406 EAL: Detected NUMA nodes: 1 00:04:13.406 EAL: Detected shared linkage of DPDK 00:04:13.406 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.406 EAL: Selected IOVA mode 'PA' 00:04:13.406 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.406 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.406 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.406 Starting DPDK initialization... 00:04:13.406 Starting SPDK post initialization... 00:04:13.406 SPDK NVMe probe 00:04:13.406 Attaching to 0000:00:10.0 00:04:13.406 Attaching to 0000:00:11.0 00:04:13.406 Attached to 0000:00:10.0 00:04:13.406 Attached to 0000:00:11.0 00:04:13.406 Cleaning up... 
00:04:13.406 ************************************ 00:04:13.406 END TEST env_dpdk_post_init 00:04:13.406 ************************************ 00:04:13.406 00:04:13.406 real 0m0.269s 00:04:13.406 user 0m0.082s 00:04:13.406 sys 0m0.087s 00:04:13.406 05:50:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.406 05:50:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 05:50:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.664 05:50:29 env -- env/env.sh@26 -- # uname 00:04:13.664 05:50:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.664 05:50:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.664 05:50:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.664 05:50:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.664 05:50:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.664 ************************************ 00:04:13.664 START TEST env_mem_callbacks 00:04:13.664 ************************************ 00:04:13.664 05:50:29 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.664 EAL: Detected CPU lcores: 10 00:04:13.664 EAL: Detected NUMA nodes: 1 00:04:13.664 EAL: Detected shared linkage of DPDK 00:04:13.664 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.664 EAL: Selected IOVA mode 'PA' 00:04:13.664 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.664 00:04:13.664 00:04:13.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.664 http://cunit.sourceforge.net/ 00:04:13.664 00:04:13.664 00:04:13.664 Suite: memory 00:04:13.664 Test: test ... 
00:04:13.664 register 0x200000200000 2097152 00:04:13.664 malloc 3145728 00:04:13.664 register 0x200000400000 4194304 00:04:13.664 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.664 malloc 64 00:04:13.664 buf 0x2000004ffec0 len 64 PASSED 00:04:13.664 malloc 4194304 00:04:13.664 register 0x200000800000 6291456 00:04:13.664 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.664 free 0x2000004fffc0 3145728 00:04:13.664 free 0x2000004ffec0 64 00:04:13.664 unregister 0x200000400000 4194304 PASSED 00:04:13.665 free 0x2000009fffc0 4194304 00:04:13.665 unregister 0x200000800000 6291456 PASSED 00:04:13.665 malloc 8388608 00:04:13.665 register 0x200000400000 10485760 00:04:13.665 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.665 free 0x2000005fffc0 8388608 00:04:13.665 unregister 0x200000400000 10485760 PASSED 00:04:13.923 passed 00:04:13.923 00:04:13.923 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.923 suites 1 1 n/a 0 0 00:04:13.923 tests 1 1 1 0 0 00:04:13.923 asserts 15 15 15 0 n/a 00:04:13.923 00:04:13.923 Elapsed time = 0.054 seconds 00:04:13.923 00:04:13.923 real 0m0.255s 00:04:13.923 user 0m0.094s 00:04:13.923 sys 0m0.059s 00:04:13.923 ************************************ 00:04:13.923 END TEST env_mem_callbacks 00:04:13.923 ************************************ 00:04:13.923 05:50:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.923 05:50:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.923 05:50:29 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.923 00:04:13.924 real 0m7.460s 00:04:13.924 user 0m5.965s 00:04:13.924 sys 0m1.104s 00:04:13.924 05:50:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.924 05:50:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.924 ************************************ 00:04:13.924 END TEST env 00:04:13.924 ************************************ 00:04:13.924 05:50:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.924 05:50:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.924 05:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.924 05:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.924 05:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.924 ************************************ 00:04:13.924 START TEST rpc 00:04:13.924 ************************************ 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.924 * Looking for test storage... 00:04:13.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.924 05:50:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59492 00:04:13.924 05:50:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.924 05:50:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.924 05:50:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59492 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@829 -- # '[' -z 59492 ']' 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.924 05:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.183 [2024-07-11 05:50:29.886595] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:14.183 [2024-07-11 05:50:29.886815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59492 ] 00:04:14.183 [2024-07-11 05:50:30.046225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.441 [2024-07-11 05:50:30.201516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.442 [2024-07-11 05:50:30.201638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59492' to capture a snapshot of events at runtime. 00:04:14.442 [2024-07-11 05:50:30.201699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.442 [2024-07-11 05:50:30.201717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.442 [2024-07-11 05:50:30.201735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59492 for offline analysis/debug. 00:04:14.442 [2024-07-11 05:50:30.201780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.442 [2024-07-11 05:50:30.357157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.009 05:50:30 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.009 05:50:30 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:15.009 05:50:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.009 05:50:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.009 05:50:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.009 05:50:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.009 05:50:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.009 05:50:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.009 05:50:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.009 ************************************ 00:04:15.009 START TEST rpc_integrity 00:04:15.009 ************************************ 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.009 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.009 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.268 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.268 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.268 { 00:04:15.268 "name": "Malloc0", 00:04:15.268 "aliases": [ 00:04:15.268 "33760d26-bf8f-42b4-931e-c650e8772342" 00:04:15.268 ], 00:04:15.268 "product_name": "Malloc disk", 00:04:15.268 "block_size": 512, 00:04:15.268 "num_blocks": 16384, 00:04:15.268 "uuid": "33760d26-bf8f-42b4-931e-c650e8772342", 00:04:15.268 "assigned_rate_limits": { 00:04:15.268 "rw_ios_per_sec": 0, 00:04:15.268 "rw_mbytes_per_sec": 0, 00:04:15.268 "r_mbytes_per_sec": 0, 00:04:15.268 "w_mbytes_per_sec": 0 00:04:15.268 }, 00:04:15.268 "claimed": false, 00:04:15.268 "zoned": false, 00:04:15.268 "supported_io_types": { 00:04:15.268 "read": true, 00:04:15.268 "write": true, 00:04:15.268 "unmap": true, 00:04:15.268 "flush": true, 00:04:15.268 "reset": true, 00:04:15.268 "nvme_admin": false, 00:04:15.268 "nvme_io": false, 00:04:15.268 "nvme_io_md": false, 00:04:15.268 "write_zeroes": true, 00:04:15.268 "zcopy": true, 00:04:15.268 "get_zone_info": false, 00:04:15.268 "zone_management": false, 00:04:15.268 "zone_append": false, 00:04:15.268 "compare": false, 00:04:15.268 "compare_and_write": false, 00:04:15.268 "abort": true, 00:04:15.268 "seek_hole": false, 00:04:15.268 "seek_data": false, 00:04:15.268 "copy": true, 00:04:15.268 "nvme_iov_md": false 00:04:15.268 }, 00:04:15.268 "memory_domains": [ 00:04:15.268 { 00:04:15.268 "dma_device_id": "system", 00:04:15.268 "dma_device_type": 1 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.268 "dma_device_type": 2 00:04:15.268 } 00:04:15.268 ], 00:04:15.268 "driver_specific": {} 00:04:15.268 } 00:04:15.268 ]' 00:04:15.268 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.268 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.268 05:50:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.268 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.268 05:50:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.268 [2024-07-11 05:50:30.998987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.268 [2024-07-11 05:50:30.999139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.268 [2024-07-11 05:50:30.999185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:15.268 [2024-07-11 05:50:30.999217] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.268 [2024-07-11 05:50:31.002183] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.268 [2024-07-11 05:50:31.002268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:15.268 Passthru0 00:04:15.268 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.268 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.268 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.268 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.268 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.268 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.268 { 00:04:15.268 "name": "Malloc0", 00:04:15.268 "aliases": [ 00:04:15.268 "33760d26-bf8f-42b4-931e-c650e8772342" 00:04:15.268 ], 00:04:15.268 "product_name": "Malloc disk", 00:04:15.268 "block_size": 512, 00:04:15.268 "num_blocks": 16384, 00:04:15.268 "uuid": "33760d26-bf8f-42b4-931e-c650e8772342", 00:04:15.268 "assigned_rate_limits": { 00:04:15.268 "rw_ios_per_sec": 0, 00:04:15.268 "rw_mbytes_per_sec": 0, 00:04:15.268 "r_mbytes_per_sec": 0, 00:04:15.268 "w_mbytes_per_sec": 0 00:04:15.268 }, 00:04:15.268 "claimed": true, 00:04:15.268 "claim_type": "exclusive_write", 00:04:15.268 "zoned": false, 00:04:15.268 "supported_io_types": { 00:04:15.268 "read": true, 00:04:15.268 "write": true, 00:04:15.268 "unmap": true, 00:04:15.268 "flush": true, 00:04:15.268 "reset": true, 00:04:15.268 "nvme_admin": false, 00:04:15.268 "nvme_io": false, 00:04:15.268 "nvme_io_md": false, 00:04:15.268 "write_zeroes": true, 00:04:15.268 "zcopy": true, 00:04:15.268 "get_zone_info": false, 00:04:15.268 "zone_management": false, 00:04:15.268 "zone_append": false, 00:04:15.268 "compare": false, 00:04:15.268 "compare_and_write": false, 00:04:15.268 "abort": true, 00:04:15.268 "seek_hole": false, 00:04:15.268 "seek_data": false, 00:04:15.268 "copy": true, 00:04:15.268 "nvme_iov_md": false 00:04:15.268 }, 00:04:15.268 "memory_domains": [ 00:04:15.268 { 00:04:15.268 "dma_device_id": "system", 00:04:15.268 "dma_device_type": 1 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.268 "dma_device_type": 2 00:04:15.268 } 00:04:15.268 ], 00:04:15.268 "driver_specific": {} 00:04:15.268 }, 00:04:15.268 { 00:04:15.268 "name": "Passthru0", 00:04:15.268 "aliases": [ 00:04:15.268 "3cc4c00f-2d85-5f72-ab1d-83886c8a0de9" 00:04:15.268 ], 00:04:15.268 "product_name": "passthru", 00:04:15.268 "block_size": 512, 00:04:15.268 "num_blocks": 16384, 00:04:15.268 "uuid": "3cc4c00f-2d85-5f72-ab1d-83886c8a0de9", 00:04:15.268 "assigned_rate_limits": { 00:04:15.268 "rw_ios_per_sec": 0, 00:04:15.268 "rw_mbytes_per_sec": 0, 00:04:15.268 "r_mbytes_per_sec": 0, 00:04:15.268 "w_mbytes_per_sec": 0 00:04:15.268 }, 00:04:15.268 "claimed": false, 00:04:15.268 "zoned": false, 00:04:15.268 "supported_io_types": { 00:04:15.268 "read": true, 00:04:15.268 "write": true, 00:04:15.268 "unmap": true, 00:04:15.268 "flush": true, 00:04:15.268 "reset": true, 00:04:15.268 "nvme_admin": false, 00:04:15.268 "nvme_io": false, 00:04:15.268 "nvme_io_md": false, 00:04:15.268 "write_zeroes": true, 00:04:15.268 "zcopy": true, 00:04:15.268 "get_zone_info": false, 00:04:15.268 "zone_management": false, 00:04:15.268 "zone_append": false, 00:04:15.269 "compare": false, 00:04:15.269 "compare_and_write": false, 00:04:15.269 "abort": true, 00:04:15.269 "seek_hole": false, 00:04:15.269 "seek_data": false, 00:04:15.269 "copy": true, 00:04:15.269 "nvme_iov_md": false 00:04:15.269 }, 00:04:15.269 "memory_domains": [ 00:04:15.269 { 00:04:15.269 "dma_device_id": "system", 00:04:15.269 
"dma_device_type": 1 00:04:15.269 }, 00:04:15.269 { 00:04:15.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.269 "dma_device_type": 2 00:04:15.269 } 00:04:15.269 ], 00:04:15.269 "driver_specific": { 00:04:15.269 "passthru": { 00:04:15.269 "name": "Passthru0", 00:04:15.269 "base_bdev_name": "Malloc0" 00:04:15.269 } 00:04:15.269 } 00:04:15.269 } 00:04:15.269 ]' 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.269 05:50:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.269 00:04:15.269 real 0m0.343s 00:04:15.269 user 0m0.215s 00:04:15.269 sys 0m0.038s 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.269 ************************************ 00:04:15.269 END TEST rpc_integrity 00:04:15.269 ************************************ 00:04:15.269 05:50:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.577 05:50:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 ************************************ 00:04:15.577 START TEST rpc_plugins 00:04:15.577 ************************************ 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 
05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.577 { 00:04:15.577 "name": "Malloc1", 00:04:15.577 "aliases": [ 00:04:15.577 "a554a50b-6646-46c3-97c5-9c0aa827551a" 00:04:15.577 ], 00:04:15.577 "product_name": "Malloc disk", 00:04:15.577 "block_size": 4096, 00:04:15.577 "num_blocks": 256, 00:04:15.577 "uuid": "a554a50b-6646-46c3-97c5-9c0aa827551a", 00:04:15.577 "assigned_rate_limits": { 00:04:15.577 "rw_ios_per_sec": 0, 00:04:15.577 "rw_mbytes_per_sec": 0, 00:04:15.577 "r_mbytes_per_sec": 0, 00:04:15.577 "w_mbytes_per_sec": 0 00:04:15.577 }, 00:04:15.577 "claimed": false, 00:04:15.577 "zoned": false, 00:04:15.577 "supported_io_types": { 00:04:15.577 "read": true, 00:04:15.577 "write": true, 00:04:15.577 "unmap": true, 00:04:15.577 "flush": true, 00:04:15.577 "reset": true, 00:04:15.577 "nvme_admin": false, 00:04:15.577 "nvme_io": false, 00:04:15.577 "nvme_io_md": false, 00:04:15.577 "write_zeroes": true, 00:04:15.577 "zcopy": true, 00:04:15.577 "get_zone_info": false, 00:04:15.577 "zone_management": false, 00:04:15.577 "zone_append": false, 00:04:15.577 "compare": false, 00:04:15.577 "compare_and_write": false, 00:04:15.577 "abort": true, 00:04:15.577 "seek_hole": false, 00:04:15.577 "seek_data": false, 00:04:15.577 "copy": true, 00:04:15.577 "nvme_iov_md": false 00:04:15.577 }, 00:04:15.577 "memory_domains": [ 00:04:15.577 { 00:04:15.577 "dma_device_id": "system", 00:04:15.577 "dma_device_type": 1 00:04:15.577 }, 00:04:15.577 { 00:04:15.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.577 "dma_device_type": 2 00:04:15.577 } 00:04:15.577 ], 00:04:15.577 "driver_specific": {} 00:04:15.577 } 00:04:15.577 ]' 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.577 05:50:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.577 00:04:15.577 real 0m0.164s 00:04:15.577 user 0m0.108s 00:04:15.577 sys 0m0.022s 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 ************************************ 00:04:15.577 END TEST rpc_plugins 00:04:15.577 ************************************ 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.577 05:50:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:15.577 05:50:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.577 ************************************ 00:04:15.577 START TEST rpc_trace_cmd_test 00:04:15.577 ************************************ 00:04:15.577 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:15.577 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.577 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.577 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.577 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.865 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59492", 00:04:15.865 "tpoint_group_mask": "0x8", 00:04:15.865 "iscsi_conn": { 00:04:15.865 "mask": "0x2", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "scsi": { 00:04:15.865 "mask": "0x4", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "bdev": { 00:04:15.865 "mask": "0x8", 00:04:15.865 "tpoint_mask": "0xffffffffffffffff" 00:04:15.865 }, 00:04:15.865 "nvmf_rdma": { 00:04:15.865 "mask": "0x10", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "nvmf_tcp": { 00:04:15.865 "mask": "0x20", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "ftl": { 00:04:15.865 "mask": "0x40", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "blobfs": { 00:04:15.865 "mask": "0x80", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "dsa": { 00:04:15.865 "mask": "0x200", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "thread": { 00:04:15.865 "mask": "0x400", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "nvme_pcie": { 00:04:15.865 "mask": "0x800", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "iaa": { 00:04:15.865 "mask": "0x1000", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "nvme_tcp": { 00:04:15.865 "mask": "0x2000", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "bdev_nvme": { 00:04:15.865 "mask": "0x4000", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 }, 00:04:15.865 "sock": { 00:04:15.865 "mask": "0x8000", 00:04:15.865 "tpoint_mask": "0x0" 00:04:15.865 } 00:04:15.865 }' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.865 00:04:15.865 real 0m0.265s 00:04:15.865 user 0m0.229s 00:04:15.865 sys 0m0.027s 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.865 
************************************ 00:04:15.865 END TEST rpc_trace_cmd_test 00:04:15.865 05:50:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 ************************************ 00:04:15.865 05:50:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.865 05:50:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.865 05:50:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.865 05:50:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.865 05:50:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.865 05:50:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.865 05:50:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 ************************************ 00:04:15.865 START TEST rpc_daemon_integrity 00:04:15.865 ************************************ 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.865 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.125 { 00:04:16.125 "name": "Malloc2", 00:04:16.125 "aliases": [ 00:04:16.125 "8502c481-3335-4b60-a07d-36b20de29e66" 00:04:16.125 ], 00:04:16.125 "product_name": "Malloc disk", 00:04:16.125 "block_size": 512, 00:04:16.125 "num_blocks": 16384, 00:04:16.125 "uuid": "8502c481-3335-4b60-a07d-36b20de29e66", 00:04:16.125 "assigned_rate_limits": { 00:04:16.125 "rw_ios_per_sec": 0, 00:04:16.125 "rw_mbytes_per_sec": 0, 00:04:16.125 "r_mbytes_per_sec": 0, 00:04:16.125 "w_mbytes_per_sec": 0 00:04:16.125 }, 00:04:16.125 "claimed": false, 00:04:16.125 "zoned": false, 00:04:16.125 "supported_io_types": { 00:04:16.125 "read": true, 00:04:16.125 "write": true, 00:04:16.125 "unmap": true, 00:04:16.125 "flush": true, 00:04:16.125 "reset": true, 00:04:16.125 "nvme_admin": false, 00:04:16.125 "nvme_io": false, 00:04:16.125 "nvme_io_md": false, 00:04:16.125 "write_zeroes": true, 00:04:16.125 "zcopy": true, 00:04:16.125 "get_zone_info": false, 00:04:16.125 "zone_management": false, 00:04:16.125 "zone_append": 
false, 00:04:16.125 "compare": false, 00:04:16.125 "compare_and_write": false, 00:04:16.125 "abort": true, 00:04:16.125 "seek_hole": false, 00:04:16.125 "seek_data": false, 00:04:16.125 "copy": true, 00:04:16.125 "nvme_iov_md": false 00:04:16.125 }, 00:04:16.125 "memory_domains": [ 00:04:16.125 { 00:04:16.125 "dma_device_id": "system", 00:04:16.125 "dma_device_type": 1 00:04:16.125 }, 00:04:16.125 { 00:04:16.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.125 "dma_device_type": 2 00:04:16.125 } 00:04:16.125 ], 00:04:16.125 "driver_specific": {} 00:04:16.125 } 00:04:16.125 ]' 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.125 [2024-07-11 05:50:31.917093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.125 [2024-07-11 05:50:31.917181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.125 [2024-07-11 05:50:31.917220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:16.125 [2024-07-11 05:50:31.917246] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.125 [2024-07-11 05:50:31.919957] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.125 [2024-07-11 05:50:31.920054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.125 Passthru0 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.125 { 00:04:16.125 "name": "Malloc2", 00:04:16.125 "aliases": [ 00:04:16.125 "8502c481-3335-4b60-a07d-36b20de29e66" 00:04:16.125 ], 00:04:16.125 "product_name": "Malloc disk", 00:04:16.125 "block_size": 512, 00:04:16.125 "num_blocks": 16384, 00:04:16.125 "uuid": "8502c481-3335-4b60-a07d-36b20de29e66", 00:04:16.125 "assigned_rate_limits": { 00:04:16.125 "rw_ios_per_sec": 0, 00:04:16.125 "rw_mbytes_per_sec": 0, 00:04:16.125 "r_mbytes_per_sec": 0, 00:04:16.125 "w_mbytes_per_sec": 0 00:04:16.125 }, 00:04:16.125 "claimed": true, 00:04:16.125 "claim_type": "exclusive_write", 00:04:16.125 "zoned": false, 00:04:16.125 "supported_io_types": { 00:04:16.125 "read": true, 00:04:16.125 "write": true, 00:04:16.125 "unmap": true, 00:04:16.125 "flush": true, 00:04:16.125 "reset": true, 00:04:16.125 "nvme_admin": false, 00:04:16.125 "nvme_io": false, 00:04:16.125 "nvme_io_md": false, 00:04:16.125 "write_zeroes": true, 00:04:16.125 "zcopy": true, 00:04:16.125 "get_zone_info": false, 00:04:16.125 "zone_management": false, 00:04:16.125 "zone_append": false, 00:04:16.125 "compare": false, 00:04:16.125 "compare_and_write": false, 00:04:16.125 "abort": true, 00:04:16.125 
"seek_hole": false, 00:04:16.125 "seek_data": false, 00:04:16.125 "copy": true, 00:04:16.125 "nvme_iov_md": false 00:04:16.125 }, 00:04:16.125 "memory_domains": [ 00:04:16.125 { 00:04:16.125 "dma_device_id": "system", 00:04:16.125 "dma_device_type": 1 00:04:16.125 }, 00:04:16.125 { 00:04:16.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.125 "dma_device_type": 2 00:04:16.125 } 00:04:16.125 ], 00:04:16.125 "driver_specific": {} 00:04:16.125 }, 00:04:16.125 { 00:04:16.125 "name": "Passthru0", 00:04:16.125 "aliases": [ 00:04:16.125 "e2ae8a37-43f9-5a9d-9c27-29927c88e2a4" 00:04:16.125 ], 00:04:16.125 "product_name": "passthru", 00:04:16.125 "block_size": 512, 00:04:16.125 "num_blocks": 16384, 00:04:16.125 "uuid": "e2ae8a37-43f9-5a9d-9c27-29927c88e2a4", 00:04:16.125 "assigned_rate_limits": { 00:04:16.125 "rw_ios_per_sec": 0, 00:04:16.125 "rw_mbytes_per_sec": 0, 00:04:16.125 "r_mbytes_per_sec": 0, 00:04:16.125 "w_mbytes_per_sec": 0 00:04:16.125 }, 00:04:16.125 "claimed": false, 00:04:16.125 "zoned": false, 00:04:16.125 "supported_io_types": { 00:04:16.125 "read": true, 00:04:16.125 "write": true, 00:04:16.125 "unmap": true, 00:04:16.125 "flush": true, 00:04:16.125 "reset": true, 00:04:16.125 "nvme_admin": false, 00:04:16.125 "nvme_io": false, 00:04:16.125 "nvme_io_md": false, 00:04:16.125 "write_zeroes": true, 00:04:16.125 "zcopy": true, 00:04:16.125 "get_zone_info": false, 00:04:16.125 "zone_management": false, 00:04:16.125 "zone_append": false, 00:04:16.125 "compare": false, 00:04:16.125 "compare_and_write": false, 00:04:16.125 "abort": true, 00:04:16.125 "seek_hole": false, 00:04:16.125 "seek_data": false, 00:04:16.125 "copy": true, 00:04:16.125 "nvme_iov_md": false 00:04:16.125 }, 00:04:16.125 "memory_domains": [ 00:04:16.125 { 00:04:16.125 "dma_device_id": "system", 00:04:16.125 "dma_device_type": 1 00:04:16.125 }, 00:04:16.125 { 00:04:16.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.125 "dma_device_type": 2 00:04:16.125 } 00:04:16.125 ], 00:04:16.125 "driver_specific": { 00:04:16.125 "passthru": { 00:04:16.125 "name": "Passthru0", 00:04:16.125 "base_bdev_name": "Malloc2" 00:04:16.125 } 00:04:16.125 } 00:04:16.125 } 00:04:16.125 ]' 00:04:16.125 05:50:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.125 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.125 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.125 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.125 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.125 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.126 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.126 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.126 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.384 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.384 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.384 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.384 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.384 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.385 05:50:32 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.385 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.385 05:50:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.385 00:04:16.385 real 0m0.348s 00:04:16.385 user 0m0.217s 00:04:16.385 sys 0m0.041s 00:04:16.385 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.385 05:50:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.385 ************************************ 00:04:16.385 END TEST rpc_daemon_integrity 00:04:16.385 ************************************ 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.385 05:50:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.385 05:50:32 rpc -- rpc/rpc.sh@84 -- # killprocess 59492 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 59492 ']' 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@952 -- # kill -0 59492 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@953 -- # uname 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59492 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.385 killing process with pid 59492 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59492' 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@967 -- # kill 59492 00:04:16.385 05:50:32 rpc -- common/autotest_common.sh@972 -- # wait 59492 00:04:18.293 00:04:18.293 real 0m4.216s 00:04:18.293 user 0m5.024s 00:04:18.293 sys 0m0.701s 00:04:18.293 05:50:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.293 05:50:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.293 ************************************ 00:04:18.293 END TEST rpc 00:04:18.293 ************************************ 00:04:18.293 05:50:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.293 05:50:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.293 05:50:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.293 05:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.293 05:50:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.293 ************************************ 00:04:18.293 START TEST skip_rpc 00:04:18.293 ************************************ 00:04:18.293 05:50:33 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.293 * Looking for test storage... 
00:04:18.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.293 05:50:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.293 05:50:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:18.293 05:50:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.293 05:50:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.293 05:50:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.293 05:50:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.293 ************************************ 00:04:18.293 START TEST skip_rpc 00:04:18.293 ************************************ 00:04:18.293 05:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:18.293 05:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59712 00:04:18.293 05:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.293 05:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.293 05:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.293 [2024-07-11 05:50:34.149734] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:18.293 [2024-07-11 05:50:34.150433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59712 ] 00:04:18.550 [2024-07-11 05:50:34.303944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.550 [2024-07-11 05:50:34.463195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.808 [2024-07-11 05:50:34.622510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59712 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59712 ']' 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59712 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59712 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.076 killing process with pid 59712 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59712' 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59712 00:04:24.076 05:50:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59712 00:04:25.450 00:04:25.451 real 0m7.013s 00:04:25.451 user 0m6.608s 00:04:25.451 sys 0m0.298s 00:04:25.451 05:50:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.451 05:50:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.451 ************************************ 00:04:25.451 END TEST skip_rpc 00:04:25.451 ************************************ 00:04:25.451 05:50:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.451 05:50:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.451 05:50:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.451 05:50:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.451 05:50:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.451 ************************************ 00:04:25.451 START TEST skip_rpc_with_json 00:04:25.451 ************************************ 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59806 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59806 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59806 ']' 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.451 05:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.451 [2024-07-11 05:50:41.230535] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:25.451 [2024-07-11 05:50:41.230718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:04:25.710 [2024-07-11 05:50:41.390369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.710 [2024-07-11 05:50:41.557495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.969 [2024-07-11 05:50:41.724340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.537 [2024-07-11 05:50:42.240280] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.537 request: 00:04:26.537 { 00:04:26.537 "trtype": "tcp", 00:04:26.537 "method": "nvmf_get_transports", 00:04:26.537 "req_id": 1 00:04:26.537 } 00:04:26.537 Got JSON-RPC error response 00:04:26.537 response: 00:04:26.537 { 00:04:26.537 "code": -19, 00:04:26.537 "message": "No such device" 00:04:26.537 } 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.537 [2024-07-11 05:50:42.252418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.537 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.537 { 00:04:26.537 "subsystems": [ 00:04:26.537 { 00:04:26.537 "subsystem": "vfio_user_target", 00:04:26.537 "config": null 00:04:26.537 }, 00:04:26.537 { 00:04:26.537 "subsystem": "keyring", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "iobuf", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "iobuf_set_options", 00:04:26.538 "params": { 00:04:26.538 "small_pool_count": 8192, 00:04:26.538 "large_pool_count": 1024, 
00:04:26.538 "small_bufsize": 8192, 00:04:26.538 "large_bufsize": 135168 00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "sock", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "sock_set_default_impl", 00:04:26.538 "params": { 00:04:26.538 "impl_name": "uring" 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "sock_impl_set_options", 00:04:26.538 "params": { 00:04:26.538 "impl_name": "ssl", 00:04:26.538 "recv_buf_size": 4096, 00:04:26.538 "send_buf_size": 4096, 00:04:26.538 "enable_recv_pipe": true, 00:04:26.538 "enable_quickack": false, 00:04:26.538 "enable_placement_id": 0, 00:04:26.538 "enable_zerocopy_send_server": true, 00:04:26.538 "enable_zerocopy_send_client": false, 00:04:26.538 "zerocopy_threshold": 0, 00:04:26.538 "tls_version": 0, 00:04:26.538 "enable_ktls": false 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "sock_impl_set_options", 00:04:26.538 "params": { 00:04:26.538 "impl_name": "posix", 00:04:26.538 "recv_buf_size": 2097152, 00:04:26.538 "send_buf_size": 2097152, 00:04:26.538 "enable_recv_pipe": true, 00:04:26.538 "enable_quickack": false, 00:04:26.538 "enable_placement_id": 0, 00:04:26.538 "enable_zerocopy_send_server": true, 00:04:26.538 "enable_zerocopy_send_client": false, 00:04:26.538 "zerocopy_threshold": 0, 00:04:26.538 "tls_version": 0, 00:04:26.538 "enable_ktls": false 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "sock_impl_set_options", 00:04:26.538 "params": { 00:04:26.538 "impl_name": "uring", 00:04:26.538 "recv_buf_size": 2097152, 00:04:26.538 "send_buf_size": 2097152, 00:04:26.538 "enable_recv_pipe": true, 00:04:26.538 "enable_quickack": false, 00:04:26.538 "enable_placement_id": 0, 00:04:26.538 "enable_zerocopy_send_server": false, 00:04:26.538 "enable_zerocopy_send_client": false, 00:04:26.538 "zerocopy_threshold": 0, 00:04:26.538 "tls_version": 0, 00:04:26.538 "enable_ktls": false 00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "vmd", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "accel", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "accel_set_options", 00:04:26.538 "params": { 00:04:26.538 "small_cache_size": 128, 00:04:26.538 "large_cache_size": 16, 00:04:26.538 "task_count": 2048, 00:04:26.538 "sequence_count": 2048, 00:04:26.538 "buf_count": 2048 00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "bdev", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "bdev_set_options", 00:04:26.538 "params": { 00:04:26.538 "bdev_io_pool_size": 65535, 00:04:26.538 "bdev_io_cache_size": 256, 00:04:26.538 "bdev_auto_examine": true, 00:04:26.538 "iobuf_small_cache_size": 128, 00:04:26.538 "iobuf_large_cache_size": 16 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "bdev_raid_set_options", 00:04:26.538 "params": { 00:04:26.538 "process_window_size_kb": 1024 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "bdev_iscsi_set_options", 00:04:26.538 "params": { 00:04:26.538 "timeout_sec": 30 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "bdev_nvme_set_options", 00:04:26.538 "params": { 00:04:26.538 "action_on_timeout": "none", 00:04:26.538 "timeout_us": 0, 00:04:26.538 "timeout_admin_us": 0, 00:04:26.538 "keep_alive_timeout_ms": 10000, 00:04:26.538 "arbitration_burst": 0, 00:04:26.538 "low_priority_weight": 0, 
00:04:26.538 "medium_priority_weight": 0, 00:04:26.538 "high_priority_weight": 0, 00:04:26.538 "nvme_adminq_poll_period_us": 10000, 00:04:26.538 "nvme_ioq_poll_period_us": 0, 00:04:26.538 "io_queue_requests": 0, 00:04:26.538 "delay_cmd_submit": true, 00:04:26.538 "transport_retry_count": 4, 00:04:26.538 "bdev_retry_count": 3, 00:04:26.538 "transport_ack_timeout": 0, 00:04:26.538 "ctrlr_loss_timeout_sec": 0, 00:04:26.538 "reconnect_delay_sec": 0, 00:04:26.538 "fast_io_fail_timeout_sec": 0, 00:04:26.538 "disable_auto_failback": false, 00:04:26.538 "generate_uuids": false, 00:04:26.538 "transport_tos": 0, 00:04:26.538 "nvme_error_stat": false, 00:04:26.538 "rdma_srq_size": 0, 00:04:26.538 "io_path_stat": false, 00:04:26.538 "allow_accel_sequence": false, 00:04:26.538 "rdma_max_cq_size": 0, 00:04:26.538 "rdma_cm_event_timeout_ms": 0, 00:04:26.538 "dhchap_digests": [ 00:04:26.538 "sha256", 00:04:26.538 "sha384", 00:04:26.538 "sha512" 00:04:26.538 ], 00:04:26.538 "dhchap_dhgroups": [ 00:04:26.538 "null", 00:04:26.538 "ffdhe2048", 00:04:26.538 "ffdhe3072", 00:04:26.538 "ffdhe4096", 00:04:26.538 "ffdhe6144", 00:04:26.538 "ffdhe8192" 00:04:26.538 ] 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "bdev_nvme_set_hotplug", 00:04:26.538 "params": { 00:04:26.538 "period_us": 100000, 00:04:26.538 "enable": false 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "bdev_wait_for_examine" 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "scsi", 00:04:26.538 "config": null 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "scheduler", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "framework_set_scheduler", 00:04:26.538 "params": { 00:04:26.538 "name": "static" 00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "vhost_scsi", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "vhost_blk", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "ublk", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "nbd", 00:04:26.538 "config": [] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "nvmf", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "nvmf_set_config", 00:04:26.538 "params": { 00:04:26.538 "discovery_filter": "match_any", 00:04:26.538 "admin_cmd_passthru": { 00:04:26.538 "identify_ctrlr": false 00:04:26.538 } 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "nvmf_set_max_subsystems", 00:04:26.538 "params": { 00:04:26.538 "max_subsystems": 1024 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "nvmf_set_crdt", 00:04:26.538 "params": { 00:04:26.538 "crdt1": 0, 00:04:26.538 "crdt2": 0, 00:04:26.538 "crdt3": 0 00:04:26.538 } 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "method": "nvmf_create_transport", 00:04:26.538 "params": { 00:04:26.538 "trtype": "TCP", 00:04:26.538 "max_queue_depth": 128, 00:04:26.538 "max_io_qpairs_per_ctrlr": 127, 00:04:26.538 "in_capsule_data_size": 4096, 00:04:26.538 "max_io_size": 131072, 00:04:26.538 "io_unit_size": 131072, 00:04:26.538 "max_aq_depth": 128, 00:04:26.538 "num_shared_buffers": 511, 00:04:26.538 "buf_cache_size": 4294967295, 00:04:26.538 "dif_insert_or_strip": false, 00:04:26.538 "zcopy": false, 00:04:26.538 "c2h_success": true, 00:04:26.538 "sock_priority": 0, 00:04:26.538 "abort_timeout_sec": 1, 00:04:26.538 "ack_timeout": 0, 00:04:26.538 "data_wr_pool_size": 0 
00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 }, 00:04:26.538 { 00:04:26.538 "subsystem": "iscsi", 00:04:26.538 "config": [ 00:04:26.538 { 00:04:26.538 "method": "iscsi_set_options", 00:04:26.538 "params": { 00:04:26.538 "node_base": "iqn.2016-06.io.spdk", 00:04:26.538 "max_sessions": 128, 00:04:26.538 "max_connections_per_session": 2, 00:04:26.538 "max_queue_depth": 64, 00:04:26.538 "default_time2wait": 2, 00:04:26.538 "default_time2retain": 20, 00:04:26.538 "first_burst_length": 8192, 00:04:26.538 "immediate_data": true, 00:04:26.538 "allow_duplicated_isid": false, 00:04:26.538 "error_recovery_level": 0, 00:04:26.538 "nop_timeout": 60, 00:04:26.538 "nop_in_interval": 30, 00:04:26.538 "disable_chap": false, 00:04:26.538 "require_chap": false, 00:04:26.538 "mutual_chap": false, 00:04:26.538 "chap_group": 0, 00:04:26.538 "max_large_datain_per_connection": 64, 00:04:26.538 "max_r2t_per_connection": 4, 00:04:26.538 "pdu_pool_size": 36864, 00:04:26.538 "immediate_data_pool_size": 16384, 00:04:26.538 "data_out_pool_size": 2048 00:04:26.538 } 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 } 00:04:26.538 ] 00:04:26.538 } 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59806 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59806 ']' 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59806 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:26.538 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.539 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59806 00:04:26.796 killing process with pid 59806 00:04:26.796 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.796 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.796 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59806' 00:04:26.796 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59806 00:04:26.796 05:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59806 00:04:28.698 05:50:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59851 00:04:28.698 05:50:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.698 05:50:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59851 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59851 ']' 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59851 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59851 00:04:33.968 killing process with pid 59851 00:04:33.968 05:50:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59851' 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59851 00:04:33.968 05:50:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59851 00:04:35.345 05:50:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.345 05:50:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.345 ************************************ 00:04:35.345 END TEST skip_rpc_with_json 00:04:35.345 ************************************ 00:04:35.345 00:04:35.345 real 0m10.095s 00:04:35.345 user 0m9.768s 00:04:35.345 sys 0m0.696s 00:04:35.345 05:50:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.345 05:50:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.345 05:50:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.345 05:50:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.345 05:50:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.345 05:50:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.345 05:50:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.605 ************************************ 00:04:35.605 START TEST skip_rpc_with_delay 00:04:35.605 ************************************ 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.605 [2024-07-11 05:50:51.389289] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:35.605 [2024-07-11 05:50:51.389443] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:35.605 00:04:35.605 real 0m0.185s 00:04:35.605 user 0m0.098s 00:04:35.605 sys 0m0.085s 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.605 05:50:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:35.605 ************************************ 00:04:35.605 END TEST skip_rpc_with_delay 00:04:35.605 ************************************ 00:04:35.605 05:50:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.605 05:50:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.605 05:50:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.605 05:50:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.605 05:50:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.605 05:50:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.605 05:50:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.605 ************************************ 00:04:35.605 START TEST exit_on_failed_rpc_init 00:04:35.605 ************************************ 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59985 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59985 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59985 ']' 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.605 05:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.864 [2024-07-11 05:50:51.639994] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:04:35.864 [2024-07-11 05:50:51.640196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:04:36.123 [2024-07-11 05:50:51.817024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.123 [2024-07-11 05:50:51.999376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.382 [2024-07-11 05:50:52.170550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.057 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.058 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.058 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:37.058 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.058 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.058 05:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.058 [2024-07-11 05:50:52.865239] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:37.058 [2024-07-11 05:50:52.865404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ] 00:04:37.317 [2024-07-11 05:50:53.035560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.576 [2024-07-11 05:50:53.261145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.576 [2024-07-11 05:50:53.261320] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:37.576 [2024-07-11 05:50:53.261343] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.576 [2024-07-11 05:50:53.261368] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59985 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59985 ']' 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59985 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59985 00:04:37.834 killing process with pid 59985 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59985' 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59985 00:04:37.834 05:50:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59985 00:04:40.366 00:04:40.366 real 0m4.212s 00:04:40.366 user 0m4.968s 00:04:40.366 sys 0m0.519s 00:04:40.366 05:50:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.366 05:50:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.366 ************************************ 00:04:40.366 END TEST exit_on_failed_rpc_init 00:04:40.366 ************************************ 00:04:40.366 05:50:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.366 05:50:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.366 00:04:40.366 real 0m21.801s 00:04:40.366 user 0m21.542s 00:04:40.366 sys 0m1.776s 00:04:40.366 ************************************ 00:04:40.366 END TEST skip_rpc 00:04:40.366 ************************************ 00:04:40.366 05:50:55 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.366 05:50:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.366 05:50:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.366 05:50:55 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.366 05:50:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.366 
05:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.366 05:50:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.366 ************************************ 00:04:40.366 START TEST rpc_client 00:04:40.366 ************************************ 00:04:40.366 05:50:55 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.366 * Looking for test storage... 00:04:40.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.366 05:50:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.366 OK 00:04:40.366 05:50:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.366 00:04:40.366 real 0m0.140s 00:04:40.366 user 0m0.061s 00:04:40.366 sys 0m0.086s 00:04:40.366 05:50:55 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.366 05:50:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.366 ************************************ 00:04:40.366 END TEST rpc_client 00:04:40.366 ************************************ 00:04:40.366 05:50:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.366 05:50:55 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.366 05:50:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.366 05:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.366 05:50:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.366 ************************************ 00:04:40.366 START TEST json_config 00:04:40.366 ************************************ 00:04:40.366 05:50:56 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.367 05:50:56 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.367 05:50:56 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.367 05:50:56 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.367 05:50:56 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.367 05:50:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.367 05:50:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.367 05:50:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.367 05:50:56 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.367 05:50:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@47 -- # : 0 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.367 05:50:56 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:40.367 INFO: JSON configuration test init 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 05:50:56 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.367 05:50:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.367 05:50:56 json_config -- json_config/common.sh@10 -- # shift 00:04:40.367 Waiting for target to run... 00:04:40.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.367 05:50:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.367 05:50:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.367 05:50:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.367 05:50:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.367 05:50:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.367 05:50:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60151 00:04:40.367 05:50:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:40.367 05:50:56 json_config -- json_config/common.sh@25 -- # waitforlisten 60151 /var/tmp/spdk_tgt.sock 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 60151 ']' 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.367 05:50:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.367 05:50:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 [2024-07-11 05:50:56.223350] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:40.367 [2024-07-11 05:50:56.223743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:04:40.935 [2024-07-11 05:50:56.562553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.935 [2024-07-11 05:50:56.772150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:41.503 05:50:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.503 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.503 05:50:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.503 05:50:57 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:41.503 05:50:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.763 [2024-07-11 05:50:57.523739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.331 05:50:58 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:42.331 05:50:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:42.331 05:50:58 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.331 05:50:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.590 05:50:58 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.590 05:50:58 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:42.590 05:50:58 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:42.590 05:50:58 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.590 05:50:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.848 MallocForNvmf0 00:04:42.848 05:50:58 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.848 05:50:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.848 MallocForNvmf1 00:04:43.107 05:50:58 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.107 05:50:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.107 [2024-07-11 05:50:59.007146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.107 05:50:59 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.107 05:50:59 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.365 05:50:59 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.365 05:50:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.624 05:50:59 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.624 05:50:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.883 05:50:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.883 05:50:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.142 [2024-07-11 05:50:59.827691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.142 05:50:59 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:44.142 05:50:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.142 05:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.142 05:50:59 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:44.142 05:50:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.142 05:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.142 05:50:59 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:44.142 05:50:59 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.142 05:50:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.400 MallocBdevForConfigChangeCheck 00:04:44.400 05:51:00 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:44.400 05:51:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.400 05:51:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.400 05:51:00 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:44.400 05:51:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.967 INFO: shutting down applications... 00:04:44.967 05:51:00 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
00:04:44.967 05:51:00 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:44.967 05:51:00 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:44.967 05:51:00 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:44.967 05:51:00 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:45.226 Calling clear_iscsi_subsystem 00:04:45.226 Calling clear_nvmf_subsystem 00:04:45.226 Calling clear_nbd_subsystem 00:04:45.226 Calling clear_ublk_subsystem 00:04:45.226 Calling clear_vhost_blk_subsystem 00:04:45.226 Calling clear_vhost_scsi_subsystem 00:04:45.226 Calling clear_bdev_subsystem 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.226 05:51:00 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:45.485 05:51:01 json_config -- json_config/json_config.sh@345 -- # break 00:04:45.485 05:51:01 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:45.485 05:51:01 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:45.485 05:51:01 json_config -- json_config/common.sh@31 -- # local app=target 00:04:45.485 05:51:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.485 05:51:01 json_config -- json_config/common.sh@35 -- # [[ -n 60151 ]] 00:04:45.485 05:51:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60151 00:04:45.485 05:51:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.485 05:51:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.485 05:51:01 json_config -- json_config/common.sh@41 -- # kill -0 60151 00:04:45.485 05:51:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.053 05:51:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.053 05:51:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.053 05:51:01 json_config -- json_config/common.sh@41 -- # kill -0 60151 00:04:46.053 05:51:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.621 05:51:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.621 05:51:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.621 05:51:02 json_config -- json_config/common.sh@41 -- # kill -0 60151 00:04:46.621 05:51:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.621 05:51:02 json_config -- json_config/common.sh@43 -- # break 00:04:46.621 SPDK target shutdown done 00:04:46.621 05:51:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.621 05:51:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.621 INFO: relaunching applications... 
00:04:46.621 05:51:02 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:46.621 05:51:02 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.621 05:51:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:46.621 05:51:02 json_config -- json_config/common.sh@10 -- # shift 00:04:46.621 05:51:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.621 05:51:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.621 05:51:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.621 05:51:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.621 05:51:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.621 05:51:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60349 00:04:46.621 Waiting for target to run... 00:04:46.621 05:51:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.621 05:51:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.621 05:51:02 json_config -- json_config/common.sh@25 -- # waitforlisten 60349 /var/tmp/spdk_tgt.sock 00:04:46.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.621 05:51:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 60349 ']' 00:04:46.621 05:51:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.622 05:51:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.622 05:51:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.622 05:51:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.622 05:51:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.622 [2024-07-11 05:51:02.494011] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:46.622 [2024-07-11 05:51:02.494467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60349 ] 00:04:47.189 [2024-07-11 05:51:02.828180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.189 [2024-07-11 05:51:02.990412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.449 [2024-07-11 05:51:03.259354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.015 [2024-07-11 05:51:03.866998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.015 [2024-07-11 05:51:03.899096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.272 00:04:48.272 INFO: Checking if target configuration is the same... 
00:04:48.272 05:51:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.272 05:51:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:48.272 05:51:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.272 05:51:03 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:48.272 05:51:03 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.272 05:51:03 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:48.272 05:51:03 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.272 05:51:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.272 + '[' 2 -ne 2 ']' 00:04:48.272 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.272 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:48.272 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.272 +++ basename /dev/fd/62 00:04:48.272 ++ mktemp /tmp/62.XXX 00:04:48.272 + tmp_file_1=/tmp/62.wD7 00:04:48.272 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.272 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.272 + tmp_file_2=/tmp/spdk_tgt_config.json.599 00:04:48.272 + ret=0 00:04:48.272 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.530 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.530 + diff -u /tmp/62.wD7 /tmp/spdk_tgt_config.json.599 00:04:48.530 INFO: JSON config files are the same 00:04:48.530 + echo 'INFO: JSON config files are the same' 00:04:48.530 + rm /tmp/62.wD7 /tmp/spdk_tgt_config.json.599 00:04:48.530 + exit 0 00:04:48.530 INFO: changing configuration and checking if this can be detected... 00:04:48.530 05:51:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:48.530 05:51:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.530 05:51:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.530 05:51:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.788 05:51:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:48.788 05:51:04 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.788 05:51:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.788 + '[' 2 -ne 2 ']' 00:04:48.788 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.788 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:48.788 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.788 +++ basename /dev/fd/62 00:04:48.788 ++ mktemp /tmp/62.XXX 00:04:48.788 + tmp_file_1=/tmp/62.8CN 00:04:48.788 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.788 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.788 + tmp_file_2=/tmp/spdk_tgt_config.json.3bk 00:04:48.788 + ret=0 00:04:48.788 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.356 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.356 + diff -u /tmp/62.8CN /tmp/spdk_tgt_config.json.3bk 00:04:49.356 + ret=1 00:04:49.356 + echo '=== Start of file: /tmp/62.8CN ===' 00:04:49.356 + cat /tmp/62.8CN 00:04:49.356 + echo '=== End of file: /tmp/62.8CN ===' 00:04:49.356 + echo '' 00:04:49.356 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3bk ===' 00:04:49.356 + cat /tmp/spdk_tgt_config.json.3bk 00:04:49.356 + echo '=== End of file: /tmp/spdk_tgt_config.json.3bk ===' 00:04:49.356 + echo '' 00:04:49.356 + rm /tmp/62.8CN /tmp/spdk_tgt_config.json.3bk 00:04:49.356 + exit 1 00:04:49.356 INFO: configuration change detected. 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@317 -- # [[ -n 60349 ]] 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.356 05:51:05 json_config -- json_config/json_config.sh@323 -- # killprocess 60349 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@948 -- # '[' -z 60349 ']' 00:04:49.356 05:51:05 json_config -- common/autotest_common.sh@952 -- # kill -0 60349 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@953 -- # uname 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60349 00:04:49.357 
killing process with pid 60349 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60349' 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@967 -- # kill 60349 00:04:49.357 05:51:05 json_config -- common/autotest_common.sh@972 -- # wait 60349 00:04:50.294 05:51:05 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.294 05:51:05 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:50.294 05:51:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.294 05:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.294 INFO: Success 00:04:50.294 05:51:05 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:50.294 05:51:05 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:50.294 00:04:50.294 real 0m9.894s 00:04:50.294 user 0m13.139s 00:04:50.294 sys 0m1.691s 00:04:50.295 ************************************ 00:04:50.295 END TEST json_config 00:04:50.295 ************************************ 00:04:50.295 05:51:05 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.295 05:51:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.295 05:51:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.295 05:51:05 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.295 05:51:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.295 05:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.295 05:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.295 ************************************ 00:04:50.295 START TEST json_config_extra_key 00:04:50.295 ************************************ 00:04:50.295 05:51:05 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.295 05:51:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.295 05:51:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.295 05:51:06 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.295 05:51:06 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.295 05:51:06 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.295 05:51:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.295 05:51:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.295 05:51:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.295 05:51:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.295 05:51:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.295 05:51:06 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.295 05:51:06 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.295 INFO: launching applications... 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.295 05:51:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.295 Waiting for target to run... 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60502 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60502 /var/tmp/spdk_tgt.sock 00:04:50.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60502 ']' 00:04:50.295 05:51:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.295 05:51:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.295 [2024-07-11 05:51:06.143931] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:50.295 [2024-07-11 05:51:06.144135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:04:50.863 [2024-07-11 05:51:06.493248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.863 [2024-07-11 05:51:06.635515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.863 [2024-07-11 05:51:06.783232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:51.431 00:04:51.431 INFO: shutting down applications... 00:04:51.431 05:51:07 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.431 05:51:07 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.431 05:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:51.431 05:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60502 ]] 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60502 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60502 00:04:51.431 05:51:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.069 05:51:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.069 05:51:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.069 05:51:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60502 00:04:52.069 05:51:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.328 05:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.328 05:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.328 05:51:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60502 00:04:52.328 05:51:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.922 05:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.922 05:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.922 05:51:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60502 00:04:52.922 05:51:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60502 00:04:53.488 SPDK target shutdown done 00:04:53.488 Success 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.488 05:51:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.488 05:51:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.488 00:04:53.488 real 0m3.212s 00:04:53.488 user 0m3.065s 00:04:53.488 sys 0m0.445s 00:04:53.488 ************************************ 00:04:53.488 END TEST json_config_extra_key 00:04:53.488 ************************************ 00:04:53.488 05:51:09 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.488 05:51:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.488 05:51:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.488 05:51:09 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.488 05:51:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.488 05:51:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.488 05:51:09 -- 
common/autotest_common.sh@10 -- # set +x 00:04:53.488 ************************************ 00:04:53.488 START TEST alias_rpc 00:04:53.488 ************************************ 00:04:53.488 05:51:09 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.488 * Looking for test storage... 00:04:53.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.489 05:51:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.489 05:51:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60587 00:04:53.489 05:51:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60587 00:04:53.489 05:51:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60587 ']' 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.489 05:51:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 [2024-07-11 05:51:09.415103] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:04:53.746 [2024-07-11 05:51:09.415540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60587 ] 00:04:53.746 [2024-07-11 05:51:09.582400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.005 [2024-07-11 05:51:09.766516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.263 [2024-07-11 05:51:09.945329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.521 05:51:10 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.521 05:51:10 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:54.521 05:51:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.089 05:51:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60587 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60587 ']' 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60587 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60587 00:04:55.089 killing process with pid 60587 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60587' 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@967 -- # kill 
60587 00:04:55.089 05:51:10 alias_rpc -- common/autotest_common.sh@972 -- # wait 60587 00:04:56.990 00:04:56.990 real 0m3.315s 00:04:56.990 user 0m3.492s 00:04:56.990 sys 0m0.468s 00:04:56.990 05:51:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.990 ************************************ 00:04:56.990 END TEST alias_rpc 00:04:56.990 ************************************ 00:04:56.990 05:51:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.990 05:51:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.990 05:51:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:56.990 05:51:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.990 05:51:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.990 05:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.990 05:51:12 -- common/autotest_common.sh@10 -- # set +x 00:04:56.990 ************************************ 00:04:56.990 START TEST spdkcli_tcp 00:04:56.990 ************************************ 00:04:56.990 05:51:12 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.990 * Looking for test storage... 00:04:56.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.990 05:51:12 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.990 05:51:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60680 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60680 00:04:56.990 05:51:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60680 ']' 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.991 05:51:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.991 [2024-07-11 05:51:12.778061] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:04:56.991 [2024-07-11 05:51:12.778230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:04:57.249 [2024-07-11 05:51:12.946273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.249 [2024-07-11 05:51:13.091263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.249 [2024-07-11 05:51:13.091280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.507 [2024-07-11 05:51:13.236382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.078 05:51:13 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.078 05:51:13 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:58.078 05:51:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:58.078 05:51:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60697 00:04:58.078 05:51:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:58.078 [ 00:04:58.078 "bdev_malloc_delete", 00:04:58.078 "bdev_malloc_create", 00:04:58.078 "bdev_null_resize", 00:04:58.078 "bdev_null_delete", 00:04:58.078 "bdev_null_create", 00:04:58.078 "bdev_nvme_cuse_unregister", 00:04:58.078 "bdev_nvme_cuse_register", 00:04:58.078 "bdev_opal_new_user", 00:04:58.078 "bdev_opal_set_lock_state", 00:04:58.078 "bdev_opal_delete", 00:04:58.078 "bdev_opal_get_info", 00:04:58.078 "bdev_opal_create", 00:04:58.078 "bdev_nvme_opal_revert", 00:04:58.078 "bdev_nvme_opal_init", 00:04:58.078 "bdev_nvme_send_cmd", 00:04:58.078 "bdev_nvme_get_path_iostat", 00:04:58.078 "bdev_nvme_get_mdns_discovery_info", 00:04:58.078 "bdev_nvme_stop_mdns_discovery", 00:04:58.078 "bdev_nvme_start_mdns_discovery", 00:04:58.078 "bdev_nvme_set_multipath_policy", 00:04:58.078 "bdev_nvme_set_preferred_path", 00:04:58.078 "bdev_nvme_get_io_paths", 00:04:58.078 "bdev_nvme_remove_error_injection", 00:04:58.078 "bdev_nvme_add_error_injection", 00:04:58.078 "bdev_nvme_get_discovery_info", 00:04:58.078 "bdev_nvme_stop_discovery", 00:04:58.078 "bdev_nvme_start_discovery", 00:04:58.078 "bdev_nvme_get_controller_health_info", 00:04:58.078 "bdev_nvme_disable_controller", 00:04:58.078 "bdev_nvme_enable_controller", 00:04:58.078 "bdev_nvme_reset_controller", 00:04:58.078 "bdev_nvme_get_transport_statistics", 00:04:58.078 "bdev_nvme_apply_firmware", 00:04:58.078 "bdev_nvme_detach_controller", 00:04:58.078 "bdev_nvme_get_controllers", 00:04:58.078 "bdev_nvme_attach_controller", 00:04:58.078 "bdev_nvme_set_hotplug", 00:04:58.078 "bdev_nvme_set_options", 00:04:58.078 "bdev_passthru_delete", 00:04:58.078 "bdev_passthru_create", 00:04:58.078 "bdev_lvol_set_parent_bdev", 00:04:58.078 "bdev_lvol_set_parent", 00:04:58.078 "bdev_lvol_check_shallow_copy", 00:04:58.078 "bdev_lvol_start_shallow_copy", 00:04:58.078 "bdev_lvol_grow_lvstore", 00:04:58.078 "bdev_lvol_get_lvols", 00:04:58.078 "bdev_lvol_get_lvstores", 00:04:58.078 "bdev_lvol_delete", 00:04:58.078 "bdev_lvol_set_read_only", 00:04:58.078 "bdev_lvol_resize", 00:04:58.078 "bdev_lvol_decouple_parent", 00:04:58.078 "bdev_lvol_inflate", 00:04:58.078 "bdev_lvol_rename", 00:04:58.078 "bdev_lvol_clone_bdev", 00:04:58.078 "bdev_lvol_clone", 00:04:58.078 "bdev_lvol_snapshot", 00:04:58.078 "bdev_lvol_create", 
00:04:58.078 "bdev_lvol_delete_lvstore", 00:04:58.078 "bdev_lvol_rename_lvstore", 00:04:58.078 "bdev_lvol_create_lvstore", 00:04:58.078 "bdev_raid_set_options", 00:04:58.078 "bdev_raid_remove_base_bdev", 00:04:58.078 "bdev_raid_add_base_bdev", 00:04:58.078 "bdev_raid_delete", 00:04:58.078 "bdev_raid_create", 00:04:58.078 "bdev_raid_get_bdevs", 00:04:58.078 "bdev_error_inject_error", 00:04:58.078 "bdev_error_delete", 00:04:58.078 "bdev_error_create", 00:04:58.078 "bdev_split_delete", 00:04:58.078 "bdev_split_create", 00:04:58.078 "bdev_delay_delete", 00:04:58.078 "bdev_delay_create", 00:04:58.078 "bdev_delay_update_latency", 00:04:58.078 "bdev_zone_block_delete", 00:04:58.078 "bdev_zone_block_create", 00:04:58.078 "blobfs_create", 00:04:58.078 "blobfs_detect", 00:04:58.078 "blobfs_set_cache_size", 00:04:58.078 "bdev_aio_delete", 00:04:58.078 "bdev_aio_rescan", 00:04:58.078 "bdev_aio_create", 00:04:58.078 "bdev_ftl_set_property", 00:04:58.078 "bdev_ftl_get_properties", 00:04:58.078 "bdev_ftl_get_stats", 00:04:58.078 "bdev_ftl_unmap", 00:04:58.078 "bdev_ftl_unload", 00:04:58.078 "bdev_ftl_delete", 00:04:58.078 "bdev_ftl_load", 00:04:58.078 "bdev_ftl_create", 00:04:58.078 "bdev_virtio_attach_controller", 00:04:58.078 "bdev_virtio_scsi_get_devices", 00:04:58.078 "bdev_virtio_detach_controller", 00:04:58.078 "bdev_virtio_blk_set_hotplug", 00:04:58.078 "bdev_iscsi_delete", 00:04:58.078 "bdev_iscsi_create", 00:04:58.078 "bdev_iscsi_set_options", 00:04:58.078 "bdev_uring_delete", 00:04:58.078 "bdev_uring_rescan", 00:04:58.078 "bdev_uring_create", 00:04:58.078 "accel_error_inject_error", 00:04:58.078 "ioat_scan_accel_module", 00:04:58.078 "dsa_scan_accel_module", 00:04:58.078 "iaa_scan_accel_module", 00:04:58.078 "vfu_virtio_create_scsi_endpoint", 00:04:58.078 "vfu_virtio_scsi_remove_target", 00:04:58.078 "vfu_virtio_scsi_add_target", 00:04:58.078 "vfu_virtio_create_blk_endpoint", 00:04:58.078 "vfu_virtio_delete_endpoint", 00:04:58.078 "keyring_file_remove_key", 00:04:58.078 "keyring_file_add_key", 00:04:58.078 "keyring_linux_set_options", 00:04:58.078 "iscsi_get_histogram", 00:04:58.078 "iscsi_enable_histogram", 00:04:58.078 "iscsi_set_options", 00:04:58.078 "iscsi_get_auth_groups", 00:04:58.078 "iscsi_auth_group_remove_secret", 00:04:58.078 "iscsi_auth_group_add_secret", 00:04:58.078 "iscsi_delete_auth_group", 00:04:58.078 "iscsi_create_auth_group", 00:04:58.078 "iscsi_set_discovery_auth", 00:04:58.078 "iscsi_get_options", 00:04:58.078 "iscsi_target_node_request_logout", 00:04:58.078 "iscsi_target_node_set_redirect", 00:04:58.078 "iscsi_target_node_set_auth", 00:04:58.078 "iscsi_target_node_add_lun", 00:04:58.078 "iscsi_get_stats", 00:04:58.078 "iscsi_get_connections", 00:04:58.078 "iscsi_portal_group_set_auth", 00:04:58.078 "iscsi_start_portal_group", 00:04:58.078 "iscsi_delete_portal_group", 00:04:58.078 "iscsi_create_portal_group", 00:04:58.078 "iscsi_get_portal_groups", 00:04:58.078 "iscsi_delete_target_node", 00:04:58.078 "iscsi_target_node_remove_pg_ig_maps", 00:04:58.078 "iscsi_target_node_add_pg_ig_maps", 00:04:58.078 "iscsi_create_target_node", 00:04:58.078 "iscsi_get_target_nodes", 00:04:58.078 "iscsi_delete_initiator_group", 00:04:58.078 "iscsi_initiator_group_remove_initiators", 00:04:58.078 "iscsi_initiator_group_add_initiators", 00:04:58.078 "iscsi_create_initiator_group", 00:04:58.078 "iscsi_get_initiator_groups", 00:04:58.078 "nvmf_set_crdt", 00:04:58.078 "nvmf_set_config", 00:04:58.078 "nvmf_set_max_subsystems", 00:04:58.078 "nvmf_stop_mdns_prr", 00:04:58.079 
"nvmf_publish_mdns_prr", 00:04:58.079 "nvmf_subsystem_get_listeners", 00:04:58.079 "nvmf_subsystem_get_qpairs", 00:04:58.079 "nvmf_subsystem_get_controllers", 00:04:58.079 "nvmf_get_stats", 00:04:58.079 "nvmf_get_transports", 00:04:58.079 "nvmf_create_transport", 00:04:58.079 "nvmf_get_targets", 00:04:58.079 "nvmf_delete_target", 00:04:58.079 "nvmf_create_target", 00:04:58.079 "nvmf_subsystem_allow_any_host", 00:04:58.079 "nvmf_subsystem_remove_host", 00:04:58.079 "nvmf_subsystem_add_host", 00:04:58.079 "nvmf_ns_remove_host", 00:04:58.079 "nvmf_ns_add_host", 00:04:58.079 "nvmf_subsystem_remove_ns", 00:04:58.079 "nvmf_subsystem_add_ns", 00:04:58.079 "nvmf_subsystem_listener_set_ana_state", 00:04:58.079 "nvmf_discovery_get_referrals", 00:04:58.079 "nvmf_discovery_remove_referral", 00:04:58.079 "nvmf_discovery_add_referral", 00:04:58.079 "nvmf_subsystem_remove_listener", 00:04:58.079 "nvmf_subsystem_add_listener", 00:04:58.079 "nvmf_delete_subsystem", 00:04:58.079 "nvmf_create_subsystem", 00:04:58.079 "nvmf_get_subsystems", 00:04:58.079 "env_dpdk_get_mem_stats", 00:04:58.079 "nbd_get_disks", 00:04:58.079 "nbd_stop_disk", 00:04:58.079 "nbd_start_disk", 00:04:58.079 "ublk_recover_disk", 00:04:58.079 "ublk_get_disks", 00:04:58.079 "ublk_stop_disk", 00:04:58.079 "ublk_start_disk", 00:04:58.079 "ublk_destroy_target", 00:04:58.079 "ublk_create_target", 00:04:58.079 "virtio_blk_create_transport", 00:04:58.079 "virtio_blk_get_transports", 00:04:58.079 "vhost_controller_set_coalescing", 00:04:58.079 "vhost_get_controllers", 00:04:58.079 "vhost_delete_controller", 00:04:58.079 "vhost_create_blk_controller", 00:04:58.079 "vhost_scsi_controller_remove_target", 00:04:58.079 "vhost_scsi_controller_add_target", 00:04:58.079 "vhost_start_scsi_controller", 00:04:58.079 "vhost_create_scsi_controller", 00:04:58.079 "thread_set_cpumask", 00:04:58.079 "framework_get_governor", 00:04:58.079 "framework_get_scheduler", 00:04:58.079 "framework_set_scheduler", 00:04:58.079 "framework_get_reactors", 00:04:58.079 "thread_get_io_channels", 00:04:58.079 "thread_get_pollers", 00:04:58.079 "thread_get_stats", 00:04:58.079 "framework_monitor_context_switch", 00:04:58.079 "spdk_kill_instance", 00:04:58.079 "log_enable_timestamps", 00:04:58.079 "log_get_flags", 00:04:58.079 "log_clear_flag", 00:04:58.079 "log_set_flag", 00:04:58.079 "log_get_level", 00:04:58.079 "log_set_level", 00:04:58.079 "log_get_print_level", 00:04:58.079 "log_set_print_level", 00:04:58.079 "framework_enable_cpumask_locks", 00:04:58.079 "framework_disable_cpumask_locks", 00:04:58.079 "framework_wait_init", 00:04:58.079 "framework_start_init", 00:04:58.079 "scsi_get_devices", 00:04:58.079 "bdev_get_histogram", 00:04:58.079 "bdev_enable_histogram", 00:04:58.079 "bdev_set_qos_limit", 00:04:58.079 "bdev_set_qd_sampling_period", 00:04:58.079 "bdev_get_bdevs", 00:04:58.079 "bdev_reset_iostat", 00:04:58.079 "bdev_get_iostat", 00:04:58.079 "bdev_examine", 00:04:58.079 "bdev_wait_for_examine", 00:04:58.079 "bdev_set_options", 00:04:58.079 "notify_get_notifications", 00:04:58.079 "notify_get_types", 00:04:58.079 "accel_get_stats", 00:04:58.079 "accel_set_options", 00:04:58.079 "accel_set_driver", 00:04:58.079 "accel_crypto_key_destroy", 00:04:58.079 "accel_crypto_keys_get", 00:04:58.079 "accel_crypto_key_create", 00:04:58.079 "accel_assign_opc", 00:04:58.079 "accel_get_module_info", 00:04:58.079 "accel_get_opc_assignments", 00:04:58.079 "vmd_rescan", 00:04:58.079 "vmd_remove_device", 00:04:58.079 "vmd_enable", 00:04:58.079 "sock_get_default_impl", 00:04:58.079 
"sock_set_default_impl", 00:04:58.079 "sock_impl_set_options", 00:04:58.079 "sock_impl_get_options", 00:04:58.079 "iobuf_get_stats", 00:04:58.079 "iobuf_set_options", 00:04:58.079 "keyring_get_keys", 00:04:58.079 "framework_get_pci_devices", 00:04:58.079 "framework_get_config", 00:04:58.079 "framework_get_subsystems", 00:04:58.079 "vfu_tgt_set_base_path", 00:04:58.079 "trace_get_info", 00:04:58.079 "trace_get_tpoint_group_mask", 00:04:58.079 "trace_disable_tpoint_group", 00:04:58.079 "trace_enable_tpoint_group", 00:04:58.079 "trace_clear_tpoint_mask", 00:04:58.079 "trace_set_tpoint_mask", 00:04:58.079 "spdk_get_version", 00:04:58.079 "rpc_get_methods" 00:04:58.079 ] 00:04:58.079 05:51:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:58.079 05:51:13 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.079 05:51:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.337 05:51:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:58.337 05:51:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60680 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60680 ']' 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60680 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60680 00:04:58.337 killing process with pid 60680 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60680' 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60680 00:04:58.337 05:51:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60680 00:05:00.241 ************************************ 00:05:00.241 END TEST spdkcli_tcp 00:05:00.241 ************************************ 00:05:00.241 00:05:00.241 real 0m3.400s 00:05:00.241 user 0m6.104s 00:05:00.241 sys 0m0.470s 00:05:00.241 05:51:15 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.241 05:51:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.241 05:51:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.241 05:51:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.241 05:51:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.241 05:51:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.241 05:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.241 ************************************ 00:05:00.241 START TEST dpdk_mem_utility 00:05:00.241 ************************************ 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.241 * Looking for test storage... 
00:05:00.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:00.241 05:51:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.241 05:51:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60789 00:05:00.241 05:51:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60789 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60789 ']' 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.241 05:51:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.241 05:51:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.501 [2024-07-11 05:51:16.227541] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:00.501 [2024-07-11 05:51:16.227746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60789 ] 00:05:00.501 [2024-07-11 05:51:16.400604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.760 [2024-07-11 05:51:16.567104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.019 [2024-07-11 05:51:16.744342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.279 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.279 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:01.279 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.279 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.279 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.279 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.279 { 00:05:01.279 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.279 } 00:05:01.279 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.279 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.540 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:01.540 1 heaps totaling size 820.000000 MiB 00:05:01.540 size: 820.000000 MiB heap id: 0 00:05:01.540 end heaps---------- 00:05:01.540 8 mempools totaling size 598.116089 MiB 00:05:01.540 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.540 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.540 size: 84.521057 MiB name: bdev_io_60789 00:05:01.540 size: 51.011292 MiB name: evtpool_60789 00:05:01.540 size: 50.003479 
MiB name: msgpool_60789 00:05:01.540 size: 21.763794 MiB name: PDU_Pool 00:05:01.540 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.540 size: 0.026123 MiB name: Session_Pool 00:05:01.540 end mempools------- 00:05:01.540 6 memzones totaling size 4.142822 MiB 00:05:01.540 size: 1.000366 MiB name: RG_ring_0_60789 00:05:01.540 size: 1.000366 MiB name: RG_ring_1_60789 00:05:01.540 size: 1.000366 MiB name: RG_ring_4_60789 00:05:01.540 size: 1.000366 MiB name: RG_ring_5_60789 00:05:01.540 size: 0.125366 MiB name: RG_ring_2_60789 00:05:01.540 size: 0.015991 MiB name: RG_ring_3_60789 00:05:01.540 end memzones------- 00:05:01.540 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.540 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:05:01.540 list of free elements. size: 18.451538 MiB 00:05:01.540 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:01.540 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:01.540 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:01.540 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:01.540 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:01.540 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:01.540 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:01.540 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:01.540 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:01.540 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:01.540 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:01.540 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:01.540 element at address: 0x20001b000000 with size: 0.564148 MiB 00:05:01.540 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:01.540 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:01.540 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:01.540 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:01.540 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:01.540 list of standard malloc elements. 
size: 199.284058 MiB 00:05:01.540 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:01.540 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:01.540 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:01.540 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:01.540 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:01.540 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:01.540 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:01.540 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:01.540 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:01.540 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:01.540 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:01.540 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:01.540 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:05:01.541 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:01.541 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:01.541 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:01.541 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:01.542 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:01.542 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:05:01.542 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:01.542 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:01.542 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:01.542 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:01.543 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:01.543 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:01.543 list of memzone associated elements. size: 602.264404 MiB 00:05:01.543 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:01.543 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.543 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:01.543 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.543 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:01.543 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60789_0 00:05:01.543 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:01.543 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60789_0 00:05:01.543 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:01.543 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60789_0 00:05:01.543 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:01.543 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.543 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:01.543 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.543 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:01.543 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60789 00:05:01.543 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:01.543 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60789 00:05:01.543 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:01.543 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60789 00:05:01.543 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:01.543 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.543 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:01.543 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.544 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:01.544 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.544 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:01.544 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.544 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:01.544 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60789 00:05:01.544 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:01.544 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60789 00:05:01.544 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:01.544 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60789 00:05:01.544 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:01.544 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60789 00:05:01.544 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:01.544 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60789 00:05:01.544 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:01.544 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.544 element at address: 0x200013878680 with size: 0.500549 MiB 
00:05:01.544 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.544 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:01.544 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.544 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:01.544 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60789 00:05:01.544 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:01.544 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.544 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:01.544 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.544 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:01.544 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60789 00:05:01.544 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:01.544 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.544 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:01.544 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60789 00:05:01.544 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:01.544 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60789 00:05:01.544 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:01.544 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.544 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.544 05:51:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60789 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60789 ']' 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60789 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60789 00:05:01.544 killing process with pid 60789 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60789' 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60789 00:05:01.544 05:51:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60789 00:05:03.450 00:05:03.450 real 0m3.029s 00:05:03.451 user 0m3.087s 00:05:03.451 sys 0m0.439s 00:05:03.451 05:51:19 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.451 ************************************ 00:05:03.451 END TEST dpdk_mem_utility 00:05:03.451 ************************************ 00:05:03.451 05:51:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.451 05:51:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.451 05:51:19 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.451 05:51:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.451 05:51:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.451 05:51:19 -- common/autotest_common.sh@10 -- # set +x 00:05:03.451 
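The memzone/element listing above is the DPDK heap dump that dpdk_memory_utility/test_dpdk_mem_info.sh prints before tearing the target down, and the teardown that follows it is the stock autotest cleanup: clear the failure-path trap, then kill the app by pid and reap it. A minimal stand-alone sketch of that pattern (the pid value is taken from this run for illustration; the real helper is the killprocess function in common/autotest_common.sh):

  app_pid=60789                      # pid from this run; the script captures it from $! at launch
  trap - SIGINT SIGTERM EXIT         # clear the failure-path trap once the checks have passed
  if kill -0 "$app_pid" 2>/dev/null; then
      echo "killing process with pid $app_pid"
      kill "$app_pid"
      wait "$app_pid" || true        # reap it so the next test starts from a clean slate
  fi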
************************************ 00:05:03.451 START TEST event 00:05:03.451 ************************************ 00:05:03.451 05:51:19 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.451 * Looking for test storage... 00:05:03.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.451 05:51:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.451 05:51:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.451 05:51:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.451 05:51:19 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:03.451 05:51:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.451 05:51:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.451 ************************************ 00:05:03.451 START TEST event_perf 00:05:03.451 ************************************ 00:05:03.451 05:51:19 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.451 Running I/O for 1 seconds...[2024-07-11 05:51:19.240144] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:03.451 [2024-07-11 05:51:19.240461] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:05:03.711 [2024-07-11 05:51:19.406211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.711 [2024-07-11 05:51:19.552261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.711 [2024-07-11 05:51:19.552352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.711 [2024-07-11 05:51:19.552468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.711 Running I/O for 1 seconds...[2024-07-11 05:51:19.552482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.087 00:05:05.087 lcore 0: 172710 00:05:05.087 lcore 1: 172709 00:05:05.087 lcore 2: 172710 00:05:05.087 lcore 3: 172709 00:05:05.087 done. 
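The four "lcore N:" counters above are the per-core event counts event_perf prints after its one-second run, roughly 172-173 thousand per reactor on the four cores selected by the mask. The invocation is visible in the xtrace; repeating it by hand looks roughly like this (build-tree path copied from the log; hugepage setup is assumed to be done already, e.g. via scripts/setup.sh):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk                      # path as used in this run
  "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1   # 4 cores (mask 0xF), 1-second run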
00:05:05.087 00:05:05.087 real 0m1.731s 00:05:05.087 user 0m4.478s 00:05:05.087 sys 0m0.124s 00:05:05.087 05:51:20 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.087 05:51:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.087 ************************************ 00:05:05.087 END TEST event_perf 00:05:05.087 ************************************ 00:05:05.087 05:51:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.087 05:51:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.087 05:51:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:05.087 05:51:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.087 05:51:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.087 ************************************ 00:05:05.087 START TEST event_reactor 00:05:05.087 ************************************ 00:05:05.087 05:51:20 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.347 [2024-07-11 05:51:21.017430] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:05.347 [2024-07-11 05:51:21.018039] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:05:05.347 [2024-07-11 05:51:21.185155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.604 [2024-07-11 05:51:21.361285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.978 test_start 00:05:06.978 oneshot 00:05:06.978 tick 100 00:05:06.978 tick 100 00:05:06.978 tick 250 00:05:06.978 tick 100 00:05:06.978 tick 100 00:05:06.978 tick 100 00:05:06.978 tick 250 00:05:06.978 tick 500 00:05:06.978 tick 100 00:05:06.978 tick 100 00:05:06.978 tick 250 00:05:06.978 tick 100 00:05:06.978 tick 100 00:05:06.978 test_end 00:05:06.978 00:05:06.978 real 0m1.741s 00:05:06.978 user 0m1.534s 00:05:06.978 sys 0m0.096s 00:05:06.978 ************************************ 00:05:06.978 END TEST event_reactor 00:05:06.978 ************************************ 00:05:06.978 05:51:22 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.978 05:51:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.978 05:51:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.978 05:51:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.978 05:51:22 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:06.978 05:51:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.978 05:51:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.978 ************************************ 00:05:06.978 START TEST event_reactor_perf 00:05:06.978 ************************************ 00:05:06.978 05:51:22 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.978 [2024-07-11 05:51:22.808954] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:05:06.978 [2024-07-11 05:51:22.809114] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60954 ] 00:05:07.237 [2024-07-11 05:51:22.979092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.495 [2024-07-11 05:51:23.164565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.876 test_start 00:05:08.876 test_end 00:05:08.876 Performance: 295860 events per second 00:05:08.876 00:05:08.876 real 0m1.748s 00:05:08.876 user 0m1.542s 00:05:08.876 sys 0m0.096s 00:05:08.876 05:51:24 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.876 ************************************ 00:05:08.876 END TEST event_reactor_perf 00:05:08.876 ************************************ 00:05:08.876 05:51:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.876 05:51:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.876 05:51:24 event -- event/event.sh@49 -- # uname -s 00:05:08.876 05:51:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.876 05:51:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.876 05:51:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.876 05:51:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.876 05:51:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.876 ************************************ 00:05:08.876 START TEST event_scheduler 00:05:08.876 ************************************ 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.876 * Looking for test storage... 00:05:08.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:08.876 05:51:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.876 05:51:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61022 00:05:08.876 05:51:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.876 05:51:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.876 05:51:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61022 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 61022 ']' 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.876 05:51:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.876 [2024-07-11 05:51:24.788303] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:05:08.876 [2024-07-11 05:51:24.788489] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:05:09.135 [2024-07-11 05:51:24.961204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.393 [2024-07-11 05:51:25.182099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.393 [2024-07-11 05:51:25.182232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.393 [2024-07-11 05:51:25.182811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.393 [2024-07-11 05:51:25.183231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.959 05:51:25 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.959 05:51:25 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:09.959 05:51:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.959 05:51:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.959 05:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.959 POWER: Cannot set governor of lcore 0 to performance 00:05:09.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.959 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:09.960 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:09.960 POWER: Unable to set Power Management Environment for lcore 0 00:05:09.960 [2024-07-11 05:51:25.700868] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:09.960 [2024-07-11 05:51:25.700905] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:09.960 [2024-07-11 05:51:25.701321] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.960 [2024-07-11 05:51:25.701374] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.960 [2024-07-11 05:51:25.701393] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.960 [2024-07-11 05:51:25.701417] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.960 05:51:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.960 05:51:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.960 05:51:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.960 05:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.960 [2024-07-11 05:51:25.872380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.218 [2024-07-11 05:51:25.957447] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:10.218 05:51:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.218 05:51:25 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.218 05:51:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.218 05:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 ************************************ 00:05:10.218 START TEST scheduler_create_thread 00:05:10.218 ************************************ 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 2 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 3 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 4 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 5 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 6 00:05:10.218 
05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 7 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 8 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.218 9 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.218 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.219 10 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.219 05:51:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.219 05:51:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.122 05:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.122 05:51:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:12.122 05:51:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:12.122 05:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.122 05:51:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 ************************************ 00:05:12.690 END TEST scheduler_create_thread 00:05:12.690 ************************************ 00:05:12.690 05:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.690 00:05:12.690 real 0m2.616s 00:05:12.690 user 0m0.017s 00:05:12.690 sys 0m0.007s 00:05:12.690 05:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.690 05:51:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:12.948 05:51:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.948 05:51:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61022 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 61022 ']' 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 61022 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61022 00:05:12.948 killing process with pid 61022 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61022' 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 61022 00:05:12.948 05:51:28 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 61022 00:05:13.207 [2024-07-11 05:51:29.065900] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
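The scheduler_create_thread subtest above drives the test app purely through plugin RPCs: it creates four active and four idle pinned threads plus two unpinned ones, bumps thread 11 to 50% activity with scheduler_thread_set_active, creates and immediately deletes one more thread (id 12), and finally kills the app (pid 61022). Issued directly with scripts/rpc.py the same sequence looks roughly like this (rpc path, RPC names and flags copied from the xtrace; the default /var/tmp/spdk.sock socket and having the scheduler_plugin module importable on PYTHONPATH are assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50    # throttle to 50% busy
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"           # retire the thread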
00:05:14.143 00:05:14.143 real 0m5.435s 00:05:14.143 user 0m9.217s 00:05:14.143 sys 0m0.432s 00:05:14.143 05:51:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.143 05:51:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.143 ************************************ 00:05:14.143 END TEST event_scheduler 00:05:14.143 ************************************ 00:05:14.143 05:51:30 event -- common/autotest_common.sh@1142 -- # return 0 00:05:14.143 05:51:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.143 05:51:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.143 05:51:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.143 05:51:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.143 05:51:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 ************************************ 00:05:14.401 START TEST app_repeat 00:05:14.401 ************************************ 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61128 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61128' 00:05:14.401 Process app_repeat pid: 61128 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.401 spdk_app_start Round 0 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.401 05:51:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61128 /var/tmp/spdk-nbd.sock 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61128 ']' 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.401 05:51:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 [2024-07-11 05:51:30.123713] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:05:14.401 [2024-07-11 05:51:30.123878] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61128 ] 00:05:14.401 [2024-07-11 05:51:30.295620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.660 [2024-07-11 05:51:30.478074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.660 [2024-07-11 05:51:30.478086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.919 [2024-07-11 05:51:30.622401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.178 05:51:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.178 05:51:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:15.178 05:51:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.436 Malloc0 00:05:15.695 05:51:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.695 Malloc1 00:05:15.695 05:51:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.695 05:51:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.954 /dev/nbd0 00:05:15.954 05:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.954 05:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:15.954 05:51:31 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.954 1+0 records in 00:05:15.954 1+0 records out 00:05:15.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312084 s, 13.1 MB/s 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:15.954 05:51:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:15.954 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.954 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.954 05:51:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.213 /dev/nbd1 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.213 1+0 records in 00:05:16.213 1+0 records out 00:05:16.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387171 s, 10.6 MB/s 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.213 05:51:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.213 05:51:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.781 { 00:05:16.781 "nbd_device": "/dev/nbd0", 00:05:16.781 "bdev_name": "Malloc0" 00:05:16.781 }, 00:05:16.781 { 00:05:16.781 "nbd_device": "/dev/nbd1", 00:05:16.781 "bdev_name": "Malloc1" 00:05:16.781 } 00:05:16.781 ]' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.781 { 00:05:16.781 "nbd_device": "/dev/nbd0", 00:05:16.781 "bdev_name": "Malloc0" 00:05:16.781 }, 00:05:16.781 { 00:05:16.781 "nbd_device": "/dev/nbd1", 00:05:16.781 "bdev_name": "Malloc1" 00:05:16.781 } 00:05:16.781 ]' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.781 /dev/nbd1' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.781 /dev/nbd1' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.781 256+0 records in 00:05:16.781 256+0 records out 00:05:16.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836517 s, 125 MB/s 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.781 256+0 records in 00:05:16.781 256+0 records out 00:05:16.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0421107 s, 24.9 MB/s 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.781 256+0 records in 00:05:16.781 256+0 records out 00:05:16.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030322 s, 34.6 MB/s 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.781 05:51:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.040 05:51:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.299 05:51:33 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.299 05:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.557 05:51:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.557 05:51:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.126 05:51:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.060 [2024-07-11 05:51:34.781030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.060 [2024-07-11 05:51:34.923137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.060 [2024-07-11 05:51:34.923141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.318 [2024-07-11 05:51:35.063450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.318 [2024-07-11 05:51:35.063596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.318 [2024-07-11 05:51:35.063621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.218 05:51:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.218 spdk_app_start Round 1 00:05:21.218 05:51:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.218 05:51:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61128 /var/tmp/spdk-nbd.sock 00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61128 ']' 00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
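Round 0 above is one complete app_repeat iteration, all driven over the app's RPC socket at /var/tmp/spdk-nbd.sock: two 64 MiB, 4096-byte-block malloc bdevs are created, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random file is written to each device and verified with cmp, the NBD devices are stopped, and the app instance is killed so the next round can start. A minimal sketch of that round as a stand-alone script (socket path, sizes and file name copied from the xtrace; requires root and the nbd kernel module, which the harness loads with modprobe):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmpfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $rpc bdev_malloc_create 64 4096                    # -> Malloc0 (64 MiB, 4096-byte blocks)
  $rpc bdev_malloc_create 64 4096                    # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of="$tmpfile" bs=4096 count=256          # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmpfile" of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmpfile" "$nbd"                           # verify the write round-trip
  done
  rm "$tmpfile"
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                    # end of the round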
00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.218 05:51:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.218 05:51:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.218 05:51:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:21.218 05:51:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.477 Malloc0 00:05:21.477 05:51:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.044 Malloc1 00:05:22.044 05:51:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.044 /dev/nbd0 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.044 1+0 records in 00:05:22.044 1+0 records out 
00:05:22.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317288 s, 12.9 MB/s 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.044 05:51:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.044 05:51:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.302 /dev/nbd1 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.302 1+0 records in 00:05:22.302 1+0 records out 00:05:22.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384649 s, 10.6 MB/s 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:22.302 05:51:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.302 05:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.561 05:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.561 { 00:05:22.561 "nbd_device": "/dev/nbd0", 00:05:22.561 "bdev_name": "Malloc0" 00:05:22.561 }, 00:05:22.561 { 00:05:22.561 "nbd_device": "/dev/nbd1", 00:05:22.561 "bdev_name": "Malloc1" 00:05:22.561 } 
00:05:22.561 ]' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.820 { 00:05:22.820 "nbd_device": "/dev/nbd0", 00:05:22.820 "bdev_name": "Malloc0" 00:05:22.820 }, 00:05:22.820 { 00:05:22.820 "nbd_device": "/dev/nbd1", 00:05:22.820 "bdev_name": "Malloc1" 00:05:22.820 } 00:05:22.820 ]' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.820 /dev/nbd1' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.820 /dev/nbd1' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.820 256+0 records in 00:05:22.820 256+0 records out 00:05:22.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457462 s, 229 MB/s 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.820 256+0 records in 00:05:22.820 256+0 records out 00:05:22.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030228 s, 34.7 MB/s 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.820 256+0 records in 00:05:22.820 256+0 records out 00:05:22.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361701 s, 29.0 MB/s 00:05:22.820 05:51:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.821 05:51:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.079 05:51:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.339 05:51:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.597 05:51:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.597 05:51:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.191 05:51:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.129 [2024-07-11 05:51:40.922534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.388 [2024-07-11 05:51:41.095609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.388 [2024-07-11 05:51:41.095614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.388 [2024-07-11 05:51:41.245023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.388 [2024-07-11 05:51:41.245158] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.388 [2024-07-11 05:51:41.245177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.290 05:51:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.290 spdk_app_start Round 2 00:05:27.290 05:51:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.290 05:51:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61128 /var/tmp/spdk-nbd.sock 00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61128 ']' 00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
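Before the Round 2 restart above, the Round 1 body finished its data pass with nbd_dd_data_verify: 1 MiB of random data goes into a scratch file, the file is copied onto each exported NBD device with O_DIRECT, cmp reads it back from both devices, and the scratch file is removed before the disks are stopped and the remaining count is checked down to zero. A minimal stand-alone sketch of that write/verify cycle, with block size, count and cmp options taken from the trace (a simplified rendering, not the nbd_common.sh helper itself; the scratch path is illustrative):

  #!/usr/bin/env bash
  # Push 1 MiB of random data through each NBD device, then verify it reads back intact.
  set -euo pipefail

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest          # the suite uses test/event/nbdrandtest

  # write phase: 256 blocks of 4096 bytes = 1 MiB of random data
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: the first 1M of each device must match the scratch file byte for byte
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"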
00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.290 05:51:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.290 05:51:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.290 05:51:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:27.290 05:51:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.858 Malloc0 00:05:27.858 05:51:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.118 Malloc1 00:05:28.118 05:51:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.118 05:51:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.378 /dev/nbd0 00:05:28.378 05:51:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.378 05:51:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.378 1+0 records in 00:05:28.378 1+0 records out 
00:05:28.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292931 s, 14.0 MB/s 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.378 05:51:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:28.378 05:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.378 05:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.378 05:51:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.378 /dev/nbd1 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.637 1+0 records in 00:05:28.637 1+0 records out 00:05:28.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423585 s, 9.7 MB/s 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.637 05:51:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.637 05:51:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.896 { 00:05:28.896 "nbd_device": "/dev/nbd0", 00:05:28.896 "bdev_name": "Malloc0" 00:05:28.896 }, 00:05:28.896 { 00:05:28.896 "nbd_device": "/dev/nbd1", 00:05:28.896 "bdev_name": "Malloc1" 00:05:28.896 } 
00:05:28.896 ]' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.896 { 00:05:28.896 "nbd_device": "/dev/nbd0", 00:05:28.896 "bdev_name": "Malloc0" 00:05:28.896 }, 00:05:28.896 { 00:05:28.896 "nbd_device": "/dev/nbd1", 00:05:28.896 "bdev_name": "Malloc1" 00:05:28.896 } 00:05:28.896 ]' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.896 /dev/nbd1' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.896 /dev/nbd1' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.896 256+0 records in 00:05:28.896 256+0 records out 00:05:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926475 s, 113 MB/s 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.896 256+0 records in 00:05:28.896 256+0 records out 00:05:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243442 s, 43.1 MB/s 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.896 256+0 records in 00:05:28.896 256+0 records out 00:05:28.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278561 s, 37.6 MB/s 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.896 05:51:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.896 05:51:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.155 05:51:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.414 05:51:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.673 05:51:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.673 05:51:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.241 05:51:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.187 [2024-07-11 05:51:46.918629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.187 [2024-07-11 05:51:47.060986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.187 [2024-07-11 05:51:47.060989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.444 [2024-07-11 05:51:47.204776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.444 [2024-07-11 05:51:47.204883] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.444 [2024-07-11 05:51:47.204905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.346 05:51:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61128 /var/tmp/spdk-nbd.sock 00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61128 ']' 00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
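Each round rebuilds the same NBD plumbing before that data pass, and the Round 2 trace above shows the full sequence: two 64 MiB malloc bdevs with a 4096-byte block size are created over the app's RPC socket, nbd_start_disk binds each one to a /dev/nbdN node, waitfornbd polls /proc/partitions (up to 20 attempts in the trace) until the kernel device appears, and a single-block O_DIRECT read serves as a sanity check. A rough equivalent of that attach sequence, assuming the app is already listening on /var/tmp/spdk-nbd.sock; the retry loop and its sleep interval are a simplified stand-in for the waitfornbd helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # create two 64 MiB malloc bdevs with 4096-byte blocks (named Malloc0 and Malloc1)
  $rpc -s "$sock" bdev_malloc_create 64 4096
  $rpc -s "$sock" bdev_malloc_create 64 4096

  # export each bdev as an NBD device and wait for the kernel node to show up
  i=0
  for bdev in Malloc0 Malloc1; do
      $rpc -s "$sock" nbd_start_disk "$bdev" "/dev/nbd$i"
      for try in $(seq 1 20); do
          grep -q -w "nbd$i" /proc/partitions && break
          sleep 0.1
      done
      # one-block direct read as a quick sanity check (the suite reads into a temp file)
      dd if="/dev/nbd$i" of=/dev/null bs=4096 count=1 iflag=direct
      i=$((i + 1))
  done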
00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.346 05:51:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:33.346 05:51:49 event.app_repeat -- event/event.sh@39 -- # killprocess 61128 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61128 ']' 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61128 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61128 00:05:33.346 killing process with pid 61128 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61128' 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61128 00:05:33.346 05:51:49 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61128 00:05:34.282 spdk_app_start is called in Round 0. 00:05:34.282 Shutdown signal received, stop current app iteration 00:05:34.282 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:05:34.282 spdk_app_start is called in Round 1. 00:05:34.282 Shutdown signal received, stop current app iteration 00:05:34.282 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:05:34.282 spdk_app_start is called in Round 2. 00:05:34.282 Shutdown signal received, stop current app iteration 00:05:34.282 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 reinitialization... 00:05:34.282 spdk_app_start is called in Round 3. 
00:05:34.282 Shutdown signal received, stop current app iteration 00:05:34.282 05:51:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.282 05:51:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:34.282 00:05:34.282 real 0m20.066s 00:05:34.282 user 0m43.589s 00:05:34.282 sys 0m2.542s 00:05:34.282 05:51:50 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.282 05:51:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.282 ************************************ 00:05:34.282 END TEST app_repeat 00:05:34.282 ************************************ 00:05:34.282 05:51:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.282 05:51:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:34.282 05:51:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.282 05:51:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.282 05:51:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.282 05:51:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.282 ************************************ 00:05:34.282 START TEST cpu_locks 00:05:34.282 ************************************ 00:05:34.282 05:51:50 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.540 * Looking for test storage... 00:05:34.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:34.540 05:51:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:34.540 05:51:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:34.540 05:51:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:34.540 05:51:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:34.540 05:51:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.540 05:51:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.540 05:51:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.540 ************************************ 00:05:34.540 START TEST default_locks 00:05:34.540 ************************************ 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:34.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61578 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61578 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61578 ']' 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
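With app_repeat done, the cpu_locks suite starts its first case, default_locks, by launching a one-core target and gating on waitforlisten, as traced above (spdk_tgt -m 0x1, rpc_addr /var/tmp/spdk.sock, max_retries=100). The sketch below is a hedged stand-in for that gate: it simply polls for the UNIX-domain socket to appear rather than reproducing the real waitforlisten helper:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_addr=/var/tmp/spdk.sock

  "$spdk_tgt" -m 0x1 &
  tgt_pid=$!

  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for retry in $(seq 1 100); do
      [ -S "$rpc_addr" ] && break                              # RPC socket is there
      kill -0 "$tgt_pid" || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done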
00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.540 05:51:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.540 [2024-07-11 05:51:50.396333] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:34.540 [2024-07-11 05:51:50.396511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:05:34.799 [2024-07-11 05:51:50.561340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.058 [2024-07-11 05:51:50.725602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.058 [2024-07-11 05:51:50.869839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:35.626 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.626 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:35.626 05:51:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61578 00:05:35.626 05:51:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61578 00:05:35.626 05:51:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61578 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61578 ']' 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61578 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61578 00:05:35.885 killing process with pid 61578 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61578' 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61578 00:05:35.885 05:51:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61578 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61578 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61578 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
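Once the target is up, the trace above runs locks_exist against its pid: lslocks lists the file locks the process holds and grep looks for an spdk_cpu_lock entry, which is how the suite decides whether the per-core lock was actually taken. The process is then killed, and the NOT waitforlisten that follows asserts nothing is left listening. A minimal rendering of the lock check itself (the pid value is taken from the trace):

  # Succeeds if the given pid holds an SPDK per-core lock file, fails otherwise.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 61578 && echo "pid 61578 holds its core lock"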
00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61578 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61578 ']' 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61578) - No such process 00:05:37.788 ERROR: process (pid: 61578) is no longer running 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.788 00:05:37.788 real 0m3.172s 00:05:37.788 user 0m3.248s 00:05:37.788 sys 0m0.545s 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.788 05:51:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 END TEST default_locks 00:05:37.788 ************************************ 00:05:37.788 05:51:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.788 05:51:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.788 05:51:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.788 05:51:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.788 05:51:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 ************************************ 00:05:37.788 START TEST default_locks_via_rpc 00:05:37.788 ************************************ 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61642 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61642 00:05:37.788 05:51:53 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61642 ']' 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.788 05:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.788 [2024-07-11 05:51:53.625780] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:37.788 [2024-07-11 05:51:53.625963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:05:38.045 [2024-07-11 05:51:53.795755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.045 [2024-07-11 05:51:53.943426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.303 [2024-07-11 05:51:54.087551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61642 00:05:38.870 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61642 00:05:38.870 05:51:54 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61642 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61642 ']' 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61642 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61642 00:05:39.137 killing process with pid 61642 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61642' 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61642 00:05:39.137 05:51:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61642 00:05:41.084 ************************************ 00:05:41.084 END TEST default_locks_via_rpc 00:05:41.084 ************************************ 00:05:41.084 00:05:41.084 real 0m3.177s 00:05:41.084 user 0m3.234s 00:05:41.084 sys 0m0.538s 00:05:41.084 05:51:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.084 05:51:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.084 05:51:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:41.084 05:51:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:41.084 05:51:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.084 05:51:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.084 05:51:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.084 ************************************ 00:05:41.084 START TEST non_locking_app_on_locked_coremask 00:05:41.084 ************************************ 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61705 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61705 /var/tmp/spdk.sock 00:05:41.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
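default_locks_via_rpc, whose trace ends just above, drives the same lock through the RPC surface instead of process flags: framework_disable_cpumask_locks drops the running target's per-core lock files, the no-locks state is confirmed, framework_enable_cpumask_locks takes them again, and locks_exist checks the lock is back before teardown. A hedged sketch of that round trip against a target already listening on /var/tmp/spdk.sock (the trace uses the suite's rpc_cmd wrapper; plain rpc.py is called here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  pid=61642                      # pid of the running spdk_tgt, from the trace

  # drop the core locks at runtime, then confirm none are held
  $rpc -s "$sock" framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held" >&2

  # re-acquire them and confirm the lock shows up again
  $rpc -s "$sock" framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "unexpected: lock not re-taken" >&2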
00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61705 ']' 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.084 05:51:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.085 [2024-07-11 05:51:56.856320] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:41.085 [2024-07-11 05:51:56.856525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61705 ] 00:05:41.343 [2024-07-11 05:51:57.024267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.344 [2024-07-11 05:51:57.169550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.602 [2024-07-11 05:51:57.321916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61721 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61721 /var/tmp/spdk2.sock 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61721 ']' 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.861 05:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 [2024-07-11 05:51:57.853408] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
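non_locking_app_on_locked_coremask, starting above, leaves the first target (61705) holding the core-0 lock and then brings up a second target on the same mask with --disable-cpumask-locks and its own RPC socket; because the second instance never competes for the lock file, it is expected to start cleanly. Reduced to the two launches (waitforlisten gating and pid bookkeeping omitted):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # first instance: takes the lock for core 0, RPC on the default /var/tmp/spdk.sock
  "$spdk_tgt" -m 0x1 &

  # second instance: same core, but opts out of core locking and uses its own socket
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The trace that follows shows the second instance logging "CPU core locks deactivated." during its startup.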
00:05:42.120 [2024-07-11 05:51:57.853828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61721 ] 00:05:42.120 [2024-07-11 05:51:58.017044] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.120 [2024-07-11 05:51:58.017113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.688 [2024-07-11 05:51:58.311212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.947 [2024-07-11 05:51:58.631138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.885 05:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.885 05:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.886 05:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61705 00:05:43.886 05:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61705 00:05:43.886 05:51:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61705 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61705 ']' 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61705 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61705 00:05:44.454 killing process with pid 61705 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61705' 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61705 00:05:44.454 05:52:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61705 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61721 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61721 ']' 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61721 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61721 00:05:48.646 killing process with pid 61721 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61721' 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61721 00:05:48.646 05:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61721 00:05:50.021 ************************************ 00:05:50.021 END TEST non_locking_app_on_locked_coremask 00:05:50.021 ************************************ 00:05:50.021 00:05:50.021 real 0m8.904s 00:05:50.021 user 0m9.263s 00:05:50.021 sys 0m1.137s 00:05:50.021 05:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.021 05:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 05:52:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:50.021 05:52:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.021 05:52:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.021 05:52:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.021 05:52:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 ************************************ 00:05:50.021 START TEST locking_app_on_unlocked_coremask 00:05:50.021 ************************************ 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61842 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61842 /var/tmp/spdk.sock 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61842 ']' 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
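Teardown of each case goes through killprocess, visible above for pids 61705 and 61721: the helper checks that the pid is still alive and names an SPDK reactor (ps comm is reactor_0, and it will not signal a bare sudo wrapper directly), sends it a TERM via kill, and then waits on the pid so the next test starts from a clean slate. A compact rendering of that pattern (the sudo branch is simplified here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                          # must still be running
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for spdk_tgt
      if [ "$process_name" = sudo ]; then
          return 1                                        # the real helper handles a sudo wrapper specially
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                         # works because the target is a child of this shell
  }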
00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.021 05:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.021 [2024-07-11 05:52:05.814289] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:05:50.021 [2024-07-11 05:52:05.815005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61842 ] 00:05:50.281 [2024-07-11 05:52:05.987494] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.281 [2024-07-11 05:52:05.987809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.281 [2024-07-11 05:52:06.196238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.540 [2024-07-11 05:52:06.370991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61858 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61858 /var/tmp/spdk2.sock 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61858 ']' 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.108 05:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.108 [2024-07-11 05:52:06.977072] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
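locking_app_on_unlocked_coremask flips the previous arrangement: this time the first target (61842) is the one launched with --disable-cpumask-locks, so core 0 is left unlocked, and the second target (61858), started normally on the same core with its own socket, is the one expected to take the lock; the trace goes on to run locks_exist against 61858 to confirm exactly that. The essentials, with the startup gating omitted (see the polling sketch earlier):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &            # first app: leaves core 0 unlocked
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &             # second app: locks core 0 normally
  pid2=$!

  # only the second instance should end up holding a core lock
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second app missing its lock" >&2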
00:05:51.108 [2024-07-11 05:52:06.977498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:05:51.367 [2024-07-11 05:52:07.142472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.626 [2024-07-11 05:52:07.443669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.885 [2024-07-11 05:52:07.751099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.821 05:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.821 05:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.821 05:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61858 00:05:52.821 05:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61858 00:05:52.821 05:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61842 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61842 ']' 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61842 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61842 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61842' 00:05:53.759 killing process with pid 61842 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61842 00:05:53.759 05:52:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61842 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61858 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61858 ']' 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61858 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61858 00:05:57.949 killing process with pid 61858 00:05:57.949 05:52:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61858' 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61858 00:05:57.949 05:52:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61858 00:05:58.954 ************************************ 00:05:58.954 END TEST locking_app_on_unlocked_coremask 00:05:58.954 ************************************ 00:05:58.954 00:05:58.954 real 0m9.171s 00:05:58.954 user 0m9.704s 00:05:58.954 sys 0m1.155s 00:05:58.954 05:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.954 05:52:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.212 05:52:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.212 05:52:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.212 05:52:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.212 05:52:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.212 05:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.212 ************************************ 00:05:59.212 START TEST locking_app_on_locked_coremask 00:05:59.212 ************************************ 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:59.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61984 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61984 /var/tmp/spdk.sock 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61984 ']' 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.212 05:52:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.212 [2024-07-11 05:52:15.040879] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:05:59.212 [2024-07-11 05:52:15.041040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:05:59.471 [2024-07-11 05:52:15.211451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.471 [2024-07-11 05:52:15.368437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.730 [2024-07-11 05:52:15.525979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62001 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62001 /var/tmp/spdk2.sock 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.297 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62001 /var/tmp/spdk2.sock 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62001 /var/tmp/spdk2.sock 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62001 ']' 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.298 05:52:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.298 [2024-07-11 05:52:16.094309] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
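The second target (62001) is deliberately launched under NOT: core 0 is already locked by 61984, so waitforlisten is expected to fail when 62001 dies with the claim error shown below. A rough sketch of the inversion idiom (the real helper in autotest_common.sh is more elaborate and also tracks the exit status):

    # NOT turns an expected failure into a pass: it succeeds only when the
    # wrapped command fails.
    NOT() {
        ! "$@"
    }
    # Harness usage: NOT waitforlisten 62001 /var/tmp/spdk2.sock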
00:06:00.298 [2024-07-11 05:52:16.094818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62001 ] 00:06:00.556 [2024-07-11 05:52:16.271419] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61984 has claimed it. 00:06:00.556 [2024-07-11 05:52:16.271677] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.815 ERROR: process (pid: 62001) is no longer running 00:06:00.815 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62001) - No such process 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.815 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61984 00:06:01.074 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61984 00:06:01.074 05:52:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61984 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61984 ']' 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61984 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61984 00:06:01.333 killing process with pid 61984 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61984' 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61984 00:06:01.333 05:52:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61984 00:06:03.238 ************************************ 00:06:03.238 END TEST locking_app_on_locked_coremask 00:06:03.238 ************************************ 00:06:03.238 00:06:03.238 real 0m4.017s 00:06:03.238 user 0m4.357s 00:06:03.238 sys 0m0.698s 00:06:03.238 05:52:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.238 05:52:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.238 05:52:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.238 05:52:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.238 05:52:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.238 05:52:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.238 05:52:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.238 ************************************ 00:06:03.238 START TEST locking_overlapped_coremask 00:06:03.238 ************************************ 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:03.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62066 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62066 /var/tmp/spdk.sock 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62066 ']' 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.238 05:52:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.238 [2024-07-11 05:52:19.114600] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:06:03.238 [2024-07-11 05:52:19.114802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62066 ] 00:06:03.497 [2024-07-11 05:52:19.285429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.756 [2024-07-11 05:52:19.450717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.756 [2024-07-11 05:52:19.450816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.756 [2024-07-11 05:52:19.450827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.756 [2024-07-11 05:52:19.606847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62084 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62084 /var/tmp/spdk2.sock 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62084 /var/tmp/spdk2.sock 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62084 /var/tmp/spdk2.sock 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62084 ']' 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.324 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.325 [2024-07-11 05:52:20.206902] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
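The first target took -m 0x7 (cores 0-2) and this one asks for -m 0x1c (cores 2-4), so the two masks collide on exactly one core. The overlap can be read straight off the masks, and it matches the claim error that follows:

    # Bitwise AND of the two core masks leaves only the shared core.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2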
00:06:04.325 [2024-07-11 05:52:20.207085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62084 ] 00:06:04.583 [2024-07-11 05:52:20.385071] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62066 has claimed it. 00:06:04.583 [2024-07-11 05:52:20.385177] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.150 ERROR: process (pid: 62084) is no longer running 00:06:05.151 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62084) - No such process 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62066 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62066 ']' 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62066 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62066 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62066' 00:06:05.151 killing process with pid 62066 00:06:05.151 05:52:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62066 00:06:05.151 05:52:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62066 00:06:07.055 00:06:07.055 real 0m3.770s 00:06:07.055 user 0m9.952s 00:06:07.055 sys 0m0.537s 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 ************************************ 00:06:07.055 END TEST locking_overlapped_coremask 00:06:07.055 ************************************ 00:06:07.055 05:52:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.055 05:52:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.055 05:52:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.055 05:52:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.055 05:52:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 ************************************ 00:06:07.055 START TEST locking_overlapped_coremask_via_rpc 00:06:07.055 ************************************ 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62148 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62148 /var/tmp/spdk.sock 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62148 ']' 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.055 05:52:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 [2024-07-11 05:52:22.932163] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:07.055 [2024-07-11 05:52:22.932328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:06:07.313 [2024-07-11 05:52:23.099550] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
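Unlike the previous case, both targets in this test start with --disable-cpumask-locks, so the overlapping 0x7/0x1c masks are tolerated at startup and the collision is only provoked later through the framework_enable_cpumask_locks RPC. Roughly, assuming the usual scripts/rpc.py client that rpc_cmd wraps:

    # First target (default /var/tmp/spdk.sock) claims its cores successfully...
    scripts/rpc.py framework_enable_cpumask_locks
    # ...then the same call against the second target is expected to fail,
    # because core 2 is already locked by the first one.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks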
00:06:07.313 [2024-07-11 05:52:23.099623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.571 [2024-07-11 05:52:23.249894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.571 [2024-07-11 05:52:23.249993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.571 [2024-07-11 05:52:23.250011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.571 [2024-07-11 05:52:23.410904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62166 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62166 /var/tmp/spdk2.sock 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62166 ']' 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.137 05:52:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.137 [2024-07-11 05:52:23.988542] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:08.137 [2024-07-11 05:52:23.988981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62166 ] 00:06:08.396 [2024-07-11 05:52:24.153273] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.396 [2024-07-11 05:52:24.153348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.654 [2024-07-11 05:52:24.496797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.654 [2024-07-11 05:52:24.499775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.654 [2024-07-11 05:52:24.499783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.222 [2024-07-11 05:52:24.836175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.158 [2024-07-11 05:52:25.834884] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62148 has claimed it. 
00:06:10.158 request: 00:06:10.158 { 00:06:10.158 "method": "framework_enable_cpumask_locks", 00:06:10.158 "req_id": 1 00:06:10.158 } 00:06:10.158 Got JSON-RPC error response 00:06:10.158 response: 00:06:10.158 { 00:06:10.158 "code": -32603, 00:06:10.158 "message": "Failed to claim CPU core: 2" 00:06:10.158 } 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62148 /var/tmp/spdk.sock 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62148 ']' 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.158 05:52:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62166 /var/tmp/spdk2.sock 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62166 ']' 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
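The refusal comes back as a standard JSON-RPC error object; -32603 is the generic internal-error code. One way to assert on such a reply outside the rpc_cmd helper, assuming the raw response text has been captured in a shell variable resp (the harness itself does not do this):

    # jq -e derives its exit status from the boolean result, so the line
    # doubles as a pass/fail check.
    echo "$resp" | jq -e '.error.code == -32603 and (.error.message | test("Failed to claim CPU core"))'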
00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.417 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.676 ************************************ 00:06:10.676 END TEST locking_overlapped_coremask_via_rpc 00:06:10.676 ************************************ 00:06:10.676 00:06:10.676 real 0m3.605s 00:06:10.676 user 0m1.391s 00:06:10.676 sys 0m0.177s 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.676 05:52:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.676 05:52:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.676 05:52:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62148 ]] 00:06:10.676 05:52:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62148 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62148 ']' 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62148 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62148 00:06:10.676 killing process with pid 62148 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.676 05:52:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.677 05:52:26 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62148' 00:06:10.677 05:52:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62148 00:06:10.677 05:52:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62148 00:06:12.584 05:52:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62166 ]] 00:06:12.584 05:52:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62166 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62166 ']' 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62166 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:12.584 05:52:28 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62166 00:06:12.584 killing process with pid 62166 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62166' 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62166 00:06:12.584 05:52:28 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62166 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62148 ]] 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62148 00:06:14.526 Process with pid 62148 is not found 00:06:14.526 Process with pid 62166 is not found 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62148 ']' 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62148 00:06:14.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62148) - No such process 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62148 is not found' 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62166 ]] 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62166 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62166 ']' 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62166 00:06:14.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62166) - No such process 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62166 is not found' 00:06:14.526 05:52:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.526 00:06:14.526 real 0m40.195s 00:06:14.526 user 1m9.122s 00:06:14.526 sys 0m5.695s 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.526 05:52:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.526 ************************************ 00:06:14.526 END TEST cpu_locks 00:06:14.526 ************************************ 00:06:14.526 05:52:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.526 ************************************ 00:06:14.526 END TEST event 00:06:14.526 ************************************ 00:06:14.526 00:06:14.526 real 1m11.322s 00:06:14.526 user 2m9.622s 00:06:14.526 sys 0m9.211s 00:06:14.526 05:52:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.526 05:52:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.784 05:52:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.784 05:52:30 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.784 05:52:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.784 05:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.784 05:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:14.784 ************************************ 00:06:14.784 START TEST thread 
00:06:14.784 ************************************ 00:06:14.784 05:52:30 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.784 * Looking for test storage... 00:06:14.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.784 05:52:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.784 05:52:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.784 05:52:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.784 05:52:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.784 ************************************ 00:06:14.784 START TEST thread_poller_perf 00:06:14.784 ************************************ 00:06:14.784 05:52:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.784 [2024-07-11 05:52:30.601551] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:14.784 [2024-07-11 05:52:30.601746] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62336 ] 00:06:15.042 [2024-07-11 05:52:30.772800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.301 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:15.301 [2024-07-11 05:52:30.993213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.675 ====================================== 00:06:16.675 busy:2212683798 (cyc) 00:06:16.675 total_run_count: 325000 00:06:16.675 tsc_hz: 2200000000 (cyc) 00:06:16.675 ====================================== 00:06:16.675 poller_cost: 6808 (cyc), 3094 (nsec) 00:06:16.675 00:06:16.675 real 0m1.807s 00:06:16.675 user 0m1.605s 00:06:16.675 sys 0m0.092s 00:06:16.675 ************************************ 00:06:16.675 END TEST thread_poller_perf 00:06:16.675 ************************************ 00:06:16.675 05:52:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.675 05:52:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.675 05:52:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:16.675 05:52:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.675 05:52:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.675 05:52:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.675 05:52:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.675 ************************************ 00:06:16.675 START TEST thread_poller_perf 00:06:16.675 ************************************ 00:06:16.675 05:52:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.675 [2024-07-11 05:52:32.461038] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
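The first summary (6808 cyc, 3094 nsec per call) is derived directly from the two counters printed with it: busy cycles divided by total_run_count, converted to nanoseconds with the advertised 2200000000 cyc/s rate. The same arithmetic by hand:

    echo $(( 2212683798 / 325000 ))                  # 6808 cycles per poller invocation
    awk 'BEGIN { printf "%d nsec\n", 6808 / 2.2 }'   # 3094 nsec at 2.2 GHz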
00:06:16.676 [2024-07-11 05:52:32.461195] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:06:16.934 [2024-07-11 05:52:32.631184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.934 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:16.934 [2024-07-11 05:52:32.795823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.313 ====================================== 00:06:18.313 busy:2203477330 (cyc) 00:06:18.313 total_run_count: 4293000 00:06:18.313 tsc_hz: 2200000000 (cyc) 00:06:18.313 ====================================== 00:06:18.313 poller_cost: 513 (cyc), 233 (nsec) 00:06:18.313 00:06:18.313 real 0m1.711s 00:06:18.313 user 0m1.514s 00:06:18.313 sys 0m0.088s 00:06:18.313 ************************************ 00:06:18.313 END TEST thread_poller_perf 00:06:18.313 ************************************ 00:06:18.313 05:52:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.313 05:52:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 05:52:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:18.313 05:52:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.313 00:06:18.313 real 0m3.697s 00:06:18.313 user 0m3.178s 00:06:18.313 sys 0m0.293s 00:06:18.313 05:52:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.313 05:52:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 ************************************ 00:06:18.313 END TEST thread 00:06:18.313 ************************************ 00:06:18.313 05:52:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.313 05:52:34 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:18.313 05:52:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.313 05:52:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.313 05:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 ************************************ 00:06:18.313 START TEST accel 00:06:18.313 ************************************ 00:06:18.313 05:52:34 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:18.572 * Looking for test storage... 00:06:18.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:18.572 05:52:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:18.572 05:52:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:18.572 05:52:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.572 05:52:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62448 00:06:18.572 05:52:34 accel -- accel/accel.sh@63 -- # waitforlisten 62448 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@829 -- # '[' -z 62448 ']' 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
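The zero-period run above shows the same relationship at a much lower per-call cost (513 cyc, 233 nsec): far more iterations fit into the one-second window, and the reported cost is again just busy cycles over run count:

    echo $(( 2203477330 / 4293000 ))                 # 513 cycles per poller invocation
    awk 'BEGIN { printf "%d nsec\n", 513 / 2.2 }'    # 233 nsec at 2.2 GHz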
00:06:18.572 05:52:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:18.572 05:52:34 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.572 05:52:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.572 05:52:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.572 05:52:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.572 05:52:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.572 05:52:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.572 05:52:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.572 05:52:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:18.572 05:52:34 accel -- accel/accel.sh@41 -- # jq -r . 00:06:18.572 [2024-07-11 05:52:34.426199] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:18.572 [2024-07-11 05:52:34.426394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62448 ] 00:06:18.831 [2024-07-11 05:52:34.597285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.090 [2024-07-11 05:52:34.823253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.090 [2024-07-11 05:52:34.992295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.657 05:52:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.657 05:52:35 accel -- common/autotest_common.sh@862 -- # return 0 00:06:19.657 05:52:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:19.657 05:52:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:19.658 05:52:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:19.658 05:52:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:19.658 05:52:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:19.658 05:52:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:19.658 05:52:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 
05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:19.658 05:52:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:19.658 05:52:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:19.658 05:52:35 accel -- accel/accel.sh@75 -- # killprocess 62448 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@948 -- # '[' -z 62448 ']' 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@952 -- # kill -0 62448 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@953 -- # uname 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62448 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.658 killing process with pid 62448 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62448' 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@967 -- # kill 62448 00:06:19.658 05:52:35 accel -- common/autotest_common.sh@972 -- # wait 62448 00:06:21.558 05:52:37 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:21.558 05:52:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:21.558 05:52:37 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.558 05:52:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.558 05:52:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.558 05:52:37 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:21.558 05:52:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
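The opcode table walked through above comes from a single RPC against the target: accel_get_opc_assignments returns a JSON object mapping each opcode to the module that services it, and the jq filter flattens that into opcode=module pairs for the shell loop. With no hardware accel modules configured every entry resolves to software. Reproduced with the plain rpc.py client that rpc_cmd wraps (output shortened, entry names illustrative):

    scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # ...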
00:06:21.558 05:52:37 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.558 05:52:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:21.816 05:52:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.816 05:52:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:21.816 05:52:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.816 05:52:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.816 05:52:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.816 ************************************ 00:06:21.816 START TEST accel_missing_filename 00:06:21.816 ************************************ 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.816 05:52:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:21.816 05:52:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:21.816 [2024-07-11 05:52:37.569455] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:21.816 [2024-07-11 05:52:37.569634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62522 ] 00:06:22.074 [2024-07-11 05:52:37.740547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.074 [2024-07-11 05:52:37.936886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.332 [2024-07-11 05:52:38.109832] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.898 [2024-07-11 05:52:38.554148] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:23.156 A filename is required. 
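That error is the point of the test: the compress workload cannot run without an input file supplied through -l, so accel_perf bails out and the NOT wrapper records the non-zero exit as a pass. Outside the harness (which also feeds a config via -c /dev/fd/62) the failing invocation looks roughly like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # -> "A filename is required." and a non-zero exit status, which is
    #    exactly what NOT expects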
00:06:23.156 ************************************ 00:06:23.156 END TEST accel_missing_filename 00:06:23.156 ************************************ 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.156 00:06:23.156 real 0m1.405s 00:06:23.156 user 0m1.196s 00:06:23.156 sys 0m0.151s 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.156 05:52:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:23.156 05:52:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.156 05:52:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.156 05:52:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:23.156 05:52:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.156 05:52:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.156 ************************************ 00:06:23.156 START TEST accel_compress_verify 00:06:23.156 ************************************ 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.156 05:52:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.156 05:52:38 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:23.156 05:52:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:23.156 [2024-07-11 05:52:39.016618] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:23.156 [2024-07-11 05:52:39.016780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62560 ] 00:06:23.415 [2024-07-11 05:52:39.173968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.415 [2024-07-11 05:52:39.329748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.673 [2024-07-11 05:52:39.481007] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.240 [2024-07-11 05:52:39.879254] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:24.500 00:06:24.500 Compression does not support the verify option, aborting. 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.500 00:06:24.500 real 0m1.249s 00:06:24.500 user 0m1.074s 00:06:24.500 sys 0m0.120s 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.500 ************************************ 00:06:24.500 END TEST accel_compress_verify 00:06:24.500 ************************************ 00:06:24.500 05:52:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.500 05:52:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.500 ************************************ 00:06:24.500 START TEST accel_wrong_workload 00:06:24.500 ************************************ 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:24.500 05:52:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:24.500 Unsupported workload type: foobar 00:06:24.500 [2024-07-11 05:52:40.328274] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:24.500 accel_perf options: 00:06:24.500 [-h help message] 00:06:24.500 [-q queue depth per core] 00:06:24.500 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:24.500 [-T number of threads per core 00:06:24.500 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:24.500 [-t time in seconds] 00:06:24.500 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:24.500 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:24.500 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:24.500 [-l for compress/decompress workloads, name of uncompressed input file 00:06:24.500 [-S for crc32c workload, use this seed value (default 0) 00:06:24.500 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:24.500 [-f for fill workload, use this BYTE value (default 255) 00:06:24.500 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:24.500 [-y verify result if this switch is on] 00:06:24.500 [-a tasks to allocate per core (default: same value as -q)] 00:06:24.500 Can be used to spread operations across a wider range of memory. 
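The option listing above is accel_perf's own usage text, printed because foobar is not one of the recognized -w workload types. For contrast, a valid invocation built from those same options uses the flags the accel_crc32c test applies further down in this log (the -c /dev/fd/62 config argument that the harness adds is omitted here):

# One-second software crc32c run, seed 32, with result verification enabled.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y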
00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.500 00:06:24.500 real 0m0.082s 00:06:24.500 user 0m0.083s 00:06:24.500 sys 0m0.042s 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.500 ************************************ 00:06:24.500 END TEST accel_wrong_workload 00:06:24.500 ************************************ 00:06:24.500 05:52:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.500 05:52:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.500 05:52:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.500 ************************************ 00:06:24.500 START TEST accel_negative_buffers 00:06:24.500 ************************************ 00:06:24.500 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:24.500 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:24.500 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:24.500 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:24.501 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.501 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:24.501 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.501 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:24.501 05:52:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:24.759 -x option must be non-negative. 
00:06:24.759 [2024-07-11 05:52:40.464422] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:24.759 accel_perf options: 00:06:24.759 [-h help message] 00:06:24.759 [-q queue depth per core] 00:06:24.759 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:24.759 [-T number of threads per core 00:06:24.759 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:24.759 [-t time in seconds] 00:06:24.759 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:24.759 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:24.759 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:24.759 [-l for compress/decompress workloads, name of uncompressed input file 00:06:24.759 [-S for crc32c workload, use this seed value (default 0) 00:06:24.759 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:24.759 [-f for fill workload, use this BYTE value (default 255) 00:06:24.759 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:24.759 [-y verify result if this switch is on] 00:06:24.759 [-a tasks to allocate per core (default: same value as -q)] 00:06:24.759 Can be used to spread operations across a wider range of memory. 00:06:24.759 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:24.759 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.759 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.759 ************************************ 00:06:24.760 END TEST accel_negative_buffers 00:06:24.760 ************************************ 00:06:24.760 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.760 00:06:24.760 real 0m0.084s 00:06:24.760 user 0m0.094s 00:06:24.760 sys 0m0.040s 00:06:24.760 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.760 05:52:40 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:24.760 05:52:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.760 05:52:40 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:24.760 05:52:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.760 05:52:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.760 05:52:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.760 ************************************ 00:06:24.760 START TEST accel_crc32c 00:06:24.760 ************************************ 00:06:24.760 05:52:40 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:24.760 05:52:40 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:24.760 [2024-07-11 05:52:40.599340] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:24.760 [2024-07-11 05:52:40.599509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62627 ] 00:06:25.018 [2024-07-11 05:52:40.768164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.018 [2024-07-11 05:52:40.923273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.275 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.276 05:52:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:27.174 05:52:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.174 00:06:27.174 real 0m2.296s 00:06:27.174 user 0m2.052s 00:06:27.174 sys 0m0.147s 00:06:27.174 05:52:42 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.174 05:52:42 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 ************************************ 00:06:27.174 END TEST accel_crc32c 00:06:27.174 ************************************ 00:06:27.174 05:52:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.174 05:52:42 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:27.174 05:52:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.174 05:52:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.174 05:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.174 ************************************ 00:06:27.174 START TEST accel_crc32c_C2 00:06:27.174 ************************************ 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:27.174 05:52:42 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.174 05:52:42 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:27.174 [2024-07-11 05:52:42.940883] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:27.174 [2024-07-11 05:52:42.941062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62679 ] 00:06:27.432 [2024-07-11 05:52:43.106316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.432 [2024-07-11 05:52:43.264921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.689 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:27.690 05:52:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.669 00:06:29.669 real 0m2.316s 00:06:29.669 user 0m0.018s 00:06:29.669 sys 0m0.002s 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.669 ************************************ 00:06:29.669 END TEST accel_crc32c_C2 00:06:29.669 ************************************ 00:06:29.669 05:52:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:29.669 05:52:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.669 05:52:45 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:29.669 05:52:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.669 05:52:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.669 05:52:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.669 ************************************ 00:06:29.669 START TEST accel_copy 00:06:29.669 ************************************ 00:06:29.669 05:52:45 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:29.669 05:52:45 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:29.669 05:52:45 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:29.669 [2024-07-11 05:52:45.309616] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:29.669 [2024-07-11 05:52:45.310473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62720 ] 00:06:29.669 [2024-07-11 05:52:45.478964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.929 [2024-07-11 05:52:45.634902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.929 05:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:31.834 05:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.834 00:06:31.834 real 0m2.278s 00:06:31.834 user 0m2.034s 00:06:31.834 sys 0m0.148s 00:06:31.834 05:52:47 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.834 05:52:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.834 ************************************ 00:06:31.834 END TEST accel_copy 00:06:31.834 ************************************ 00:06:31.834 05:52:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.834 05:52:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.834 05:52:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:31.834 05:52:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.834 05:52:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.834 ************************************ 00:06:31.834 START TEST accel_fill 00:06:31.834 ************************************ 00:06:31.834 05:52:47 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.834 05:52:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:31.834 05:52:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:31.834 05:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.834 05:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.834 05:52:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.835 05:52:47 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:31.835 05:52:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:31.835 [2024-07-11 05:52:47.643175] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:31.835 [2024-07-11 05:52:47.643342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62761 ] 00:06:32.093 [2024-07-11 05:52:47.811413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.093 [2024-07-11 05:52:47.974417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 05:52:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:34.258 05:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.258 00:06:34.258 real 0m2.295s 00:06:34.258 user 0m0.014s 00:06:34.258 sys 0m0.002s 00:06:34.258 05:52:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.258 ************************************ 00:06:34.258 END TEST accel_fill 00:06:34.258 05:52:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:34.258 ************************************ 00:06:34.258 05:52:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.258 05:52:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:34.258 05:52:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.258 05:52:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.258 05:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.258 ************************************ 00:06:34.258 START TEST accel_copy_crc32c 00:06:34.258 ************************************ 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:34.258 05:52:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:34.258 [2024-07-11 05:52:49.990317] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:34.258 [2024-07-11 05:52:49.990503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62808 ] 00:06:34.258 [2024-07-11 05:52:50.158677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.517 [2024-07-11 05:52:50.314749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.776 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.777 05:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
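The accel_copy_crc32c block above is accel.sh driving the accel_perf example with the copy_crc32c workload (the traced command is /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y). A minimal sketch of rerunning that case by hand, assuming the same build layout as the CI VM and taking the flag meanings (1-second run, chosen workload, verify results) as inferred from the trace rather than documented here:

    # Hypothetical manual rerun of the traced copy_crc32c case; the path and the
    # flag interpretations are assumptions based on the command visible in the log.
    SPDK_BUILD=/home/vagrant/spdk_repo/spdk/build
    "$SPDK_BUILD"/examples/accel_perf -t 1 -w copy_crc32c -y

Dropping the -c /dev/fd/62 config, which adds no module entries in this run, should leave accel_perf on its defaults, consistent with the accel_module=software assignment recorded in the trace.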
00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.681 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.682 00:06:36.682 real 0m2.239s 00:06:36.682 user 0m0.011s 00:06:36.682 sys 0m0.005s 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.682 05:52:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:36.682 ************************************ 00:06:36.682 END TEST accel_copy_crc32c 00:06:36.682 ************************************ 00:06:36.682 05:52:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.682 05:52:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.682 05:52:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:36.682 05:52:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.682 05:52:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.682 ************************************ 00:06:36.682 START TEST accel_copy_crc32c_C2 00:06:36.682 ************************************ 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.682 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:36.682 [2024-07-11 05:52:52.280616] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:36.682 [2024-07-11 05:52:52.280880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62849 ] 00:06:36.682 [2024-07-11 05:52:52.474192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.941 [2024-07-11 05:52:52.621840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.941 05:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.846 00:06:38.846 real 0m2.297s 00:06:38.846 user 0m2.027s 00:06:38.846 sys 0m0.175s 00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
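The accel_copy_crc32c_C2 case that just finished differs from the plain copy_crc32c run only by the extra -C 2 argument on its run_test line, and its configuration dump records an 8192-byte buffer next to the 4096-byte one. A plausible reading, not confirmed by this log alone, is that -C sets how many buffer segments are chained into each crc32c operation. The equivalent manual command, reusing the SPDK_BUILD assumption from the earlier sketch, would be:

    # -C 2 copied verbatim from the traced run_test/accel_perf invocation.
    "$SPDK_BUILD"/examples/accel_perf -t 1 -w copy_crc32c -y -C 2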
00:06:38.846 05:52:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:38.846 ************************************ 00:06:38.846 END TEST accel_copy_crc32c_C2 00:06:38.846 ************************************ 00:06:38.846 05:52:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.846 05:52:54 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.846 05:52:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.846 05:52:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.846 05:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.846 ************************************ 00:06:38.846 START TEST accel_dualcast 00:06:38.846 ************************************ 00:06:38.846 05:52:54 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.846 05:52:54 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.847 05:52:54 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.847 05:52:54 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:38.847 05:52:54 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:38.847 [2024-07-11 05:52:54.610932] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
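Right before each accel_perf launch the trace shows build_accel_config: an accel_json_cfg array that stays empty in this run, a local IFS=, join, and a jq -r . pretty-print, with the result handed to accel_perf as -c /dev/fd/62 through process substitution. A rough reconstruction of that wiring is sketched below; the exact JSON envelope is an assumption, since the empty array never exposes it in this log:

    # Hypothetical reconstruction of the config-over-/dev/fd pattern seen in the trace.
    accel_json_cfg=()   # empty here, so accel_perf falls back to the software module
    build_cfg() {
        local IFS=,
        printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' "${accel_json_cfg[*]}" | jq -r .
    }
    "$SPDK_BUILD"/examples/accel_perf -c <(build_cfg) -t 1 -w dualcast -y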
00:06:38.847 [2024-07-11 05:52:54.611083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62895 ] 00:06:38.847 [2024-07-11 05:52:54.763400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.105 [2024-07-11 05:52:54.919072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.365 05:52:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:41.270 05:52:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.270 00:06:41.270 real 0m2.233s 00:06:41.270 user 0m2.013s 00:06:41.270 sys 0m0.125s 00:06:41.270 ************************************ 00:06:41.270 END TEST accel_dualcast 00:06:41.270 ************************************ 00:06:41.270 05:52:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.270 05:52:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:41.270 05:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.270 05:52:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:41.270 05:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.270 05:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.270 05:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.270 ************************************ 00:06:41.270 START TEST accel_compare 00:06:41.270 ************************************ 00:06:41.270 05:52:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:41.270 05:52:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:41.270 [2024-07-11 05:52:56.913851] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
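The long runs of IFS=:, read -r var val and case "$var" in entries that dominate this trace are accel.sh splitting each "key: value" line printed by accel_perf on the colon and keeping the fields it cares about, for example accel_opc=compare and accel_module=software. A stripped-down sketch of that parsing pattern follows; the real case labels in accel.sh are not visible in the trace, so the patterns below are assumptions:

    # Hypothetical parser mirroring the IFS=: / read -r var val / case "$var" loop in the trace.
    while IFS=: read -r var val; do
        case "$var" in
            *[Mm]odule*)   accel_module=${val//[[:space:]]/} ;;  # e.g. software
            *[Ww]orkload*) accel_opc=${val//[[:space:]]/}    ;;  # e.g. compare
        esac
    done < <("$SPDK_BUILD"/examples/accel_perf -t 1 -w compare -y)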
00:06:41.270 [2024-07-11 05:52:56.914044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62936 ] 00:06:41.270 [2024-07-11 05:52:57.083723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.529 [2024-07-11 05:52:57.241377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.529 05:52:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.530 05:52:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.530 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.530 05:52:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.432 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:43.433 05:52:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.433 ************************************ 00:06:43.433 END TEST accel_compare 00:06:43.433 ************************************ 00:06:43.433 00:06:43.433 real 0m2.274s 00:06:43.433 user 0m2.039s 00:06:43.433 sys 0m0.142s 00:06:43.433 05:52:59 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.433 05:52:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:43.433 05:52:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.433 05:52:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:43.433 05:52:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.433 05:52:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.433 05:52:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.433 ************************************ 00:06:43.433 START TEST accel_xor 00:06:43.433 ************************************ 00:06:43.433 05:52:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:43.433 05:52:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:43.433 [2024-07-11 05:52:59.254729] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
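Every test in this log ends with the same three checks that appear just above for accel_compare: the parsed module name and opcode must be non-empty, and the module must equal software (the \s\o\f\t\w\a\r\e form is only xtrace escaping a quoted right-hand side so it is matched literally rather than as a glob). Condensed, and assuming accel_module and accel_opc were filled in by the parsing loop sketched earlier:

    # End-of-test assertions matching the three [[ ... ]] lines in the trace.
    [[ -n $accel_module ]]             # an engine/module was reported
    [[ -n $accel_opc ]]                # the requested opcode was reported back
    [[ $accel_module == software ]]    # this CI job expects the software module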
00:06:43.433 [2024-07-11 05:52:59.254974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62983 ] 00:06:43.691 [2024-07-11 05:52:59.423620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.691 [2024-07-11 05:52:59.578538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:43.950 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.951 05:52:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.871 00:06:45.871 real 0m2.300s 00:06:45.871 user 0m2.039s 00:06:45.871 sys 0m0.169s 00:06:45.871 05:53:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.871 05:53:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:45.871 ************************************ 00:06:45.871 END TEST accel_xor 00:06:45.871 ************************************ 00:06:45.871 05:53:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.871 05:53:01 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:45.871 05:53:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:45.871 05:53:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.871 05:53:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.871 ************************************ 00:06:45.871 START TEST accel_xor 00:06:45.871 ************************************ 00:06:45.871 05:53:01 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:45.871 05:53:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:45.871 [2024-07-11 05:53:01.585406] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
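The second accel_xor test repeats the previous one with an extra -x 3 (run_test accel_xor accel_test -t 1 -w xor -y -x 3 in the trace), and its configuration dump records val=3 where the first xor run recorded val=2, which reads as the number of XOR source buffers. The two manual equivalents, under the same path assumption as before:

    # Default two-source and explicit three-source XOR runs, flags copied from the trace.
    "$SPDK_BUILD"/examples/accel_perf -t 1 -w xor -y
    "$SPDK_BUILD"/examples/accel_perf -t 1 -w xor -y -x 3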
00:06:45.871 [2024-07-11 05:53:01.585600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:06:45.872 [2024-07-11 05:53:01.755101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.130 [2024-07-11 05:53:01.912639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:46.388 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.389 05:53:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.289 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.290 05:53:03 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:48.290 05:53:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.290 00:06:48.290 real 0m2.253s 00:06:48.290 user 0m2.015s 00:06:48.290 sys 0m0.147s 00:06:48.290 05:53:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.290 05:53:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:48.290 ************************************ 00:06:48.290 END TEST accel_xor 00:06:48.290 ************************************ 00:06:48.290 05:53:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.290 05:53:03 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:48.290 05:53:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:48.290 05:53:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.290 05:53:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.290 ************************************ 00:06:48.290 START TEST accel_dif_verify 00:06:48.290 ************************************ 00:06:48.290 05:53:03 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.290 05:53:03 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.290 [2024-07-11 05:53:03.887337] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:06:48.290 [2024-07-11 05:53:03.887530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63065 ] 00:06:48.290 [2024-07-11 05:53:04.054902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.548 [2024-07-11 05:53:04.213815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.548 05:53:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:50.450 05:53:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.450 00:06:50.450 real 0m2.239s 00:06:50.450 user 0m2.018s 00:06:50.450 sys 0m0.131s 00:06:50.450 05:53:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.450 05:53:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:50.450 ************************************ 00:06:50.450 END TEST accel_dif_verify 00:06:50.450 ************************************ 00:06:50.450 05:53:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.450 05:53:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:50.450 05:53:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:50.450 05:53:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.450 05:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.450 ************************************ 00:06:50.450 START TEST accel_dif_generate 00:06:50.450 ************************************ 00:06:50.450 05:53:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:50.450 05:53:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:50.450 05:53:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.451 05:53:06 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:50.451 05:53:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:50.451 [2024-07-11 05:53:06.182718] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:50.451 [2024-07-11 05:53:06.183012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63111 ] 00:06:50.451 [2024-07-11 05:53:06.347802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.709 [2024-07-11 05:53:06.513760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.968 05:53:06 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.968 05:53:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:52.875 05:53:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.875 00:06:52.875 real 0m2.273s 
00:06:52.875 user 0m2.041s 00:06:52.875 sys 0m0.139s 00:06:52.875 05:53:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.875 05:53:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 ************************************ 00:06:52.875 END TEST accel_dif_generate 00:06:52.875 ************************************ 00:06:52.875 05:53:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.875 05:53:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:52.875 05:53:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.875 05:53:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.875 05:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 ************************************ 00:06:52.875 START TEST accel_dif_generate_copy 00:06:52.875 ************************************ 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:52.875 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:52.875 [2024-07-11 05:53:08.500358] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:06:52.875 [2024-07-11 05:53:08.500546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63158 ] 00:06:52.875 [2024-07-11 05:53:08.671750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.135 [2024-07-11 05:53:08.827324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.135 05:53:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.040 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:55.041 ************************************ 00:06:55.041 END TEST accel_dif_generate_copy 00:06:55.041 ************************************ 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.041 00:06:55.041 real 0m2.267s 00:06:55.041 user 0m2.034s 00:06:55.041 sys 0m0.141s 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.041 05:53:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 05:53:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.041 05:53:10 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:55.041 05:53:10 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.041 05:53:10 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:55.041 05:53:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.041 05:53:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 ************************************ 00:06:55.041 START TEST accel_comp 00:06:55.041 ************************************ 00:06:55.041 05:53:10 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:55.041 05:53:10 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:55.041 05:53:10 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:55.041 [2024-07-11 05:53:10.824138] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:55.041 [2024-07-11 05:53:10.824305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63199 ] 00:06:55.300 [2024-07-11 05:53:10.994073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.300 [2024-07-11 05:53:11.142938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.559 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.560 05:53:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:57.466 05:53:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.466 00:06:57.466 real 0m2.261s 00:06:57.466 user 0m2.027s 00:06:57.466 sys 0m0.139s 00:06:57.466 05:53:13 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.466 ************************************ 00:06:57.466 END TEST accel_comp 00:06:57.466 ************************************ 00:06:57.466 05:53:13 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:57.466 05:53:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.466 05:53:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.466 05:53:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:57.466 05:53:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.466 05:53:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.466 ************************************ 00:06:57.466 START TEST accel_decomp 00:06:57.466 ************************************ 00:06:57.466 05:53:13 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:57.466 05:53:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:57.467 05:53:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:57.467 [2024-07-11 05:53:13.139171] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:06:57.467 [2024-07-11 05:53:13.139335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63240 ] 00:06:57.467 [2024-07-11 05:53:13.306708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.726 [2024-07-11 05:53:13.475563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.726 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.727 05:53:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.631 05:53:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.631 00:06:59.631 real 0m2.272s 00:06:59.631 user 0m2.031s 00:06:59.631 sys 0m0.148s 00:06:59.631 05:53:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.631 ************************************ 00:06:59.631 END TEST accel_decomp 00:06:59.631 ************************************ 00:06:59.631 05:53:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:59.631 05:53:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.631 05:53:15 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.631 05:53:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:59.631 05:53:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.631 05:53:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.631 ************************************ 00:06:59.631 START TEST accel_decomp_full 00:06:59.631 ************************************ 00:06:59.631 05:53:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:59.631 05:53:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:59.631 [2024-07-11 05:53:15.460438] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
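accel_decomp_full drives the same binary and input but appends -o 0; in the trace that follows, the per-operation size recorded by accel.sh changes from '4096 bytes' to '111250 bytes', apparently the full bib payload rather than 4 KiB slices. The corresponding hand-run sketch, under the same assumptions as the accel_decomp one above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0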
00:06:59.631 [2024-07-11 05:53:15.460596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63281 ] 00:06:59.891 [2024-07-11 05:53:15.608532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.891 [2024-07-11 05:53:15.766256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.167 05:53:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.076 05:53:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.076 00:07:02.076 real 0m2.234s 00:07:02.076 user 0m2.011s 00:07:02.076 sys 0m0.128s 00:07:02.076 05:53:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.076 05:53:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:02.076 ************************************ 00:07:02.076 END TEST accel_decomp_full 00:07:02.076 ************************************ 00:07:02.076 05:53:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.076 05:53:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.076 05:53:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.076 05:53:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.076 05:53:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.076 ************************************ 00:07:02.076 START TEST accel_decomp_mcore 00:07:02.076 ************************************ 00:07:02.076 05:53:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.077 05:53:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.077 [2024-07-11 05:53:17.757733] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:02.077 [2024-07-11 05:53:17.757931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63328 ] 00:07:02.077 [2024-07-11 05:53:17.928590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.336 [2024-07-11 05:53:18.107286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.336 [2024-07-11 05:53:18.107423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.336 [2024-07-11 05:53:18.107523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.336 [2024-07-11 05:53:18.107720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
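The mcore variant appends -m 0xf, which surfaces above as -c 0xf in the DPDK EAL parameters and as reactor threads started on cores 0 through 3; it is also why the timing summary further down shows roughly 7 seconds of user time against about 2.4 seconds of wall clock. A hand-run sketch with the same caveats as the earlier ones (0xf is the only mask this log actually exercised; other masks are untested here):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf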
00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.608 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 
05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.609 05:53:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.524 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.524 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.524 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.525 00:07:04.525 real 0m2.408s 00:07:04.525 user 0m6.994s 00:07:04.525 sys 0m0.177s 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.525 05:53:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.525 ************************************ 00:07:04.525 END TEST accel_decomp_mcore 00:07:04.525 ************************************ 00:07:04.525 05:53:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.525 05:53:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.525 05:53:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.525 05:53:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.525 05:53:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.525 ************************************ 00:07:04.525 START TEST accel_decomp_full_mcore 00:07:04.525 ************************************ 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:07:04.525 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:04.525 [2024-07-11 05:53:20.204004] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:04.525 [2024-07-11 05:53:20.204738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63377 ] 00:07:04.525 [2024-07-11 05:53:20.359198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.784 [2024-07-11 05:53:20.535785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.784 [2024-07-11 05:53:20.535887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.784 [2024-07-11 05:53:20.535976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.784 [2024-07-11 05:53:20.536230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.044 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.045 05:53:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.950 ************************************ 00:07:06.950 END TEST accel_decomp_full_mcore 00:07:06.950 ************************************ 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.950 00:07:06.950 real 0m2.386s 00:07:06.950 user 0m7.066s 00:07:06.950 sys 0m0.159s 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.950 05:53:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.950 05:53:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.950 05:53:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.950 05:53:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:06.950 05:53:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.950 05:53:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.950 ************************************ 00:07:06.950 START TEST accel_decomp_mthread 00:07:06.950 ************************************ 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.950 05:53:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.950 [2024-07-11 05:53:22.642333] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:06.950 [2024-07-11 05:53:22.642477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63427 ] 00:07:06.950 [2024-07-11 05:53:22.800692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.209 [2024-07-11 05:53:22.962512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
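accel_decomp_mthread stays on one core (EAL mask -c 0x1 and a single reactor on core 0, as logged above) but passes -T 2, and the trace below records val=2 where the single-threaded runs recorded val=1. A hand-run sketch under the same assumptions as before:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2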
00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.467 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.468 05:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.371 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.372 00:07:09.372 real 0m2.342s 00:07:09.372 user 0m2.118s 00:07:09.372 sys 0m0.131s 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.372 05:53:24 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.372 ************************************ 00:07:09.372 END TEST accel_decomp_mthread 00:07:09.372 ************************************ 00:07:09.372 05:53:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.372 05:53:24 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.372 05:53:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:09.372 05:53:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.372 05:53:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.372 ************************************ 00:07:09.372 START 
TEST accel_decomp_full_mthread 00:07:09.372 ************************************ 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.372 05:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:09.372 [2024-07-11 05:53:25.044008] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
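accel_decomp_full_mthread combines the two previous twists, -o 0 and -T 2, on top of the base invocation. Since the six decomp variants in this section differ only in those trailing options, a compact way to replay the whole sweep locally is sketched below; the option strings are copied verbatim from the commands logged here, while BIN, BIB and the loop itself are just local scaffolding and the same software-module/hugepage assumptions apply:

  BIN=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  for extra in '' '-o 0' '-m 0xf' '-o 0 -m 0xf' '-T 2' '-o 0 -T 2'; do
      echo "== accel_perf -t 1 -w decompress -y $extra =="
      "$BIN" -t 1 -w decompress -l "$BIB" -y $extra   # $extra left unquoted so the options split
  done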
00:07:09.372 [2024-07-11 05:53:25.044204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63468 ] 00:07:09.372 [2024-07-11 05:53:25.214263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.630 [2024-07-11 05:53:25.395680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.889 05:53:25 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.889 05:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.789 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.790 00:07:11.790 real 0m2.409s 00:07:11.790 user 0m2.172s 00:07:11.790 sys 0m0.137s 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.790 05:53:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:11.790 ************************************ 00:07:11.790 END TEST accel_decomp_full_mthread 00:07:11.790 ************************************ 
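The accel_decomp_full_mthread run above reduces to the single accel_perf invocation visible in the accel.sh@12 trace line. A minimal hand-run sketch follows; reading -o 0 as "use the full size of the input file" (the '111250 bytes' value above) and feeding an empty JSON object as the accel config are assumptions on my part, since the harness generates its own config and hands it over on /dev/fd/62.

SPDK=/home/vagrant/spdk_repo/spdk
# Reconstructed from the command logged above: 1-second software decompress of the
# 'bib' corpus, output verification enabled (-y), two threads (-T 2), full-file
# transfer size (-o 0). The empty '{}' config fed on fd 62 is an assumption.
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -T 2 62<<< '{}'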
00:07:11.790 05:53:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.790 05:53:27 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:11.790 05:53:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.790 05:53:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.790 05:53:27 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:11.790 05:53:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.790 05:53:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.790 05:53:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.790 05:53:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.790 05:53:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.790 05:53:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.790 05:53:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.790 05:53:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:11.790 05:53:27 accel -- accel/accel.sh@41 -- # jq -r . 00:07:11.790 ************************************ 00:07:11.790 START TEST accel_dif_functional_tests 00:07:11.790 ************************************ 00:07:11.790 05:53:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.790 [2024-07-11 05:53:27.554017] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:11.790 [2024-07-11 05:53:27.554187] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63515 ] 00:07:12.048 [2024-07-11 05:53:27.724940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.048 [2024-07-11 05:53:27.885727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.048 [2024-07-11 05:53:27.885838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.048 [2024-07-11 05:53:27.885850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.306 [2024-07-11 05:53:28.049117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.306 00:07:12.306 00:07:12.306 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.306 http://cunit.sourceforge.net/ 00:07:12.306 00:07:12.306 00:07:12.306 Suite: accel_dif 00:07:12.306 Test: verify: DIF generated, GUARD check ...passed 00:07:12.306 Test: verify: DIF generated, APPTAG check ...passed 00:07:12.306 Test: verify: DIF generated, REFTAG check ...passed 00:07:12.306 Test: verify: DIF not generated, GUARD check ...[2024-07-11 05:53:28.134837] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.306 passed 00:07:12.306 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 05:53:28.135153] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.306 passed 00:07:12.306 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 05:53:28.135341] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.306 passed 00:07:12.306 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:12.306 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 05:53:28.135635] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:12.306 passed 00:07:12.306 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:12.306 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:12.306 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:12.306 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 05:53:28.136195] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:12.306 passed 00:07:12.306 Test: verify copy: DIF generated, GUARD check ...passed 00:07:12.306 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:12.306 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:12.306 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 05:53:28.136918] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.306 passed 00:07:12.306 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 05:53:28.137103] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.306 passed 00:07:12.307 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-11 05:53:28.137290] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.307 passed 00:07:12.307 Test: generate copy: DIF generated, GUARD check ...passed 00:07:12.307 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:12.307 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:12.307 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:12.307 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:12.307 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:12.307 Test: generate copy: iovecs-len validate ...[2024-07-11 05:53:28.138223] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:12.307 passed 00:07:12.307 Test: generate copy: buffer alignment validate ...passed 00:07:12.307 00:07:12.307 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.307 suites 1 1 n/a 0 0 00:07:12.307 tests 26 26 26 0 0 00:07:12.307 asserts 115 115 115 0 n/a 00:07:12.307 00:07:12.307 Elapsed time = 0.007 seconds 00:07:13.243 00:07:13.243 real 0m1.703s 00:07:13.243 user 0m3.221s 00:07:13.243 sys 0m0.199s 00:07:13.243 05:53:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.243 05:53:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:13.243 ************************************ 00:07:13.243 END TEST accel_dif_functional_tests 00:07:13.243 ************************************ 00:07:13.501 05:53:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.501 ************************************ 00:07:13.501 END TEST accel 00:07:13.501 ************************************ 00:07:13.501 00:07:13.501 real 0m54.976s 00:07:13.501 user 1m0.234s 00:07:13.501 sys 0m4.697s 00:07:13.501 05:53:29 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.501 05:53:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.501 05:53:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.501 05:53:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:13.501 05:53:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.501 05:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.501 05:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.501 ************************************ 00:07:13.501 START TEST accel_rpc 00:07:13.501 ************************************ 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:13.501 * Looking for test storage... 00:07:13.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:13.501 05:53:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.501 05:53:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63592 00:07:13.501 05:53:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:13.501 05:53:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63592 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63592 ']' 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.501 05:53:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.502 [2024-07-11 05:53:29.419529] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
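The accel_dif_functional_tests block above drives the standalone CUnit binary test/accel/dif/dif with the same config-over-fd pattern. The *ERROR* lines it prints for the "not generated" cases are negative checks the suite provokes on purpose; the run summary (26 of 26 tests passed, 0 failed) is the result that matters. A hand-run sketch, assuming an empty JSON object is an acceptable stand-in for the config that build_accel_config produces:

SPDK=/home/vagrant/spdk_repo/spdk
# Sketch only: re-run the DIF functional suite the way the harness does, with the
# accel JSON config supplied on /dev/fd/62. The empty '{}' config is an assumption.
"$SPDK/test/accel/dif/dif" -c /dev/fd/62 62<<< '{}'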
00:07:13.502 [2024-07-11 05:53:29.419931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63592 ] 00:07:13.760 [2024-07-11 05:53:29.579648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.019 [2024-07-11 05:53:29.759497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.586 05:53:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.586 05:53:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.586 05:53:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:14.586 05:53:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:14.586 05:53:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:14.586 05:53:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:14.586 05:53:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:14.586 05:53:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.586 05:53:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.586 05:53:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 ************************************ 00:07:14.586 START TEST accel_assign_opcode 00:07:14.586 ************************************ 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 [2024-07-11 05:53:30.268692] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 [2024-07-11 05:53:30.276692] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.586 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 [2024-07-11 05:53:30.429248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:15.154 05:53:30 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.154 software 00:07:15.154 00:07:15.154 real 0m0.640s 00:07:15.154 user 0m0.053s 00:07:15.154 sys 0m0.011s 00:07:15.154 ************************************ 00:07:15.154 END TEST accel_assign_opcode 00:07:15.154 ************************************ 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.154 05:53:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:15.154 05:53:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63592 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63592 ']' 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63592 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63592 00:07:15.154 killing process with pid 63592 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63592' 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 63592 00:07:15.154 05:53:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 63592 00:07:17.059 00:07:17.059 real 0m3.448s 00:07:17.059 user 0m3.467s 00:07:17.059 sys 0m0.403s 00:07:17.059 05:53:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.059 ************************************ 00:07:17.059 END TEST accel_rpc 00:07:17.059 05:53:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 ************************************ 00:07:17.059 05:53:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.059 05:53:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:17.059 05:53:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.059 05:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.059 05:53:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 ************************************ 00:07:17.059 START TEST app_cmdline 00:07:17.059 ************************************ 00:07:17.059 05:53:32 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:17.059 * Looking for test storage... 
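The accel_rpc/accel_assign_opcode sequence above amounts to a handful of RPCs against a target started with --wait-for-rpc. The sketch below uses the scripts/rpc.py calls that rpc_cmd wraps; the sleep stands in for the harness's waitforlisten on /var/tmp/spdk.sock, and running it this way is an illustration rather than the literal accel_rpc.sh code.

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &        # start the target paused, before subsystem init
tgt_pid=$!
sleep 1                                            # stand-in for waitforlisten on the RPC socket
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
"$SPDK/scripts/rpc.py" framework_start_init                    # finish initialization
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expected to print: software
kill "$tgt_pid"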
00:07:17.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:17.059 05:53:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:17.059 05:53:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63708 00:07:17.059 05:53:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:17.059 05:53:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63708 00:07:17.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.059 05:53:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63708 ']' 00:07:17.059 05:53:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.059 05:53:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.060 05:53:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.060 05:53:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.060 05:53:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.060 [2024-07-11 05:53:32.917897] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:17.060 [2024-07-11 05:53:32.918292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63708 ] 00:07:17.319 [2024-07-11 05:53:33.070789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.319 [2024-07-11 05:53:33.221711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.578 [2024-07-11 05:53:33.393585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.147 05:53:33 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.147 05:53:33 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:18.147 05:53:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:18.407 { 00:07:18.407 "version": "SPDK v24.09-pre git sha1 9937c0160", 00:07:18.407 "fields": { 00:07:18.407 "major": 24, 00:07:18.407 "minor": 9, 00:07:18.407 "patch": 0, 00:07:18.407 "suffix": "-pre", 00:07:18.407 "commit": "9937c0160" 00:07:18.407 } 00:07:18.407 } 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:18.407 05:53:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:18.407 05:53:34 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.679 request: 00:07:18.679 { 00:07:18.679 "method": "env_dpdk_get_mem_stats", 00:07:18.679 "req_id": 1 00:07:18.679 } 00:07:18.679 Got JSON-RPC error response 00:07:18.679 response: 00:07:18.679 { 00:07:18.679 "code": -32601, 00:07:18.679 "message": "Method not found" 00:07:18.679 } 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.679 05:53:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63708 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63708 ']' 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63708 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63708 00:07:18.679 killing process with pid 63708 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63708' 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 63708 00:07:18.679 05:53:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 63708 00:07:20.584 00:07:20.584 real 0m3.466s 00:07:20.584 user 0m4.002s 00:07:20.584 sys 0m0.436s 00:07:20.584 ************************************ 00:07:20.584 END TEST app_cmdline 00:07:20.584 ************************************ 00:07:20.584 05:53:36 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.584 05:53:36 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.584 05:53:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.584 05:53:36 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:20.584 05:53:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.584 05:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.584 05:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:20.584 ************************************ 00:07:20.584 START TEST version 00:07:20.584 ************************************ 00:07:20.584 05:53:36 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:20.584 * Looking for test storage... 00:07:20.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:20.584 05:53:36 version -- app/version.sh@17 -- # get_header_version major 00:07:20.584 05:53:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # cut -f2 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.584 05:53:36 version -- app/version.sh@17 -- # major=24 00:07:20.584 05:53:36 version -- app/version.sh@18 -- # get_header_version minor 00:07:20.584 05:53:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # cut -f2 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.584 05:53:36 version -- app/version.sh@18 -- # minor=9 00:07:20.584 05:53:36 version -- app/version.sh@19 -- # get_header_version patch 00:07:20.584 05:53:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # cut -f2 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.584 05:53:36 version -- app/version.sh@19 -- # patch=0 00:07:20.584 05:53:36 version -- app/version.sh@20 -- # get_header_version suffix 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.584 05:53:36 version -- app/version.sh@14 -- # cut -f2 00:07:20.584 05:53:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.584 05:53:36 version -- app/version.sh@20 -- # suffix=-pre 00:07:20.584 05:53:36 version -- app/version.sh@22 -- # version=24.9 00:07:20.584 05:53:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:20.584 05:53:36 version -- app/version.sh@28 -- # version=24.9rc0 00:07:20.584 05:53:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:20.584 05:53:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:20.584 05:53:36 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:20.584 05:53:36 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:20.584 00:07:20.584 real 0m0.153s 00:07:20.584 user 0m0.096s 00:07:20.584 sys 0m0.090s 00:07:20.584 ************************************ 00:07:20.584 END TEST version 00:07:20.584 ************************************ 00:07:20.584 05:53:36 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.584 05:53:36 version -- common/autotest_common.sh@10 -- # set +x 00:07:20.584 05:53:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.584 05:53:36 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:20.584 05:53:36 -- spdk/autotest.sh@198 -- # uname -s 00:07:20.584 05:53:36 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:20.584 05:53:36 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:20.584 05:53:36 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:07:20.584 05:53:36 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:07:20.584 05:53:36 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:20.584 05:53:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.584 05:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.584 05:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:20.584 ************************************ 00:07:20.584 START TEST spdk_dd 00:07:20.584 ************************************ 00:07:20.584 05:53:36 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:20.843 * Looking for test storage... 00:07:20.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.843 05:53:36 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.843 05:53:36 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.843 05:53:36 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.843 05:53:36 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.843 05:53:36 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.843 05:53:36 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.844 05:53:36 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.844 05:53:36 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:20.844 05:53:36 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.844 05:53:36 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:21.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:21.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:21.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:21.103 05:53:36 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:21.103 05:53:36 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@230 -- # local class 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@232 -- # local progif 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@233 -- # class=01 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:21.103 05:53:36 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:21.104 05:53:36 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:07:21.104 05:53:36 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:21.104 05:53:36 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@139 -- # local lib so 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:21.104 
05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:36 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:21.104 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:21.105 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.105 05:53:37 
spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:21.365 * spdk_dd linked to liburing 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:21.365 05:53:37 spdk_dd -- 
common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:21.365 05:53:37 spdk_dd -- 
common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:21.365 05:53:37 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:07:21.365 05:53:37 spdk_dd -- dd/common.sh@157 -- # return 0 00:07:21.365 05:53:37 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:21.365 05:53:37 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:21.366 05:53:37 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:21.366 05:53:37 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.366 05:53:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.366 ************************************ 00:07:21.366 START TEST spdk_dd_basic_rw 00:07:21.366 ************************************ 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:21.366 * Looking for test storage... 00:07:21.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:21.366 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:21.627 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not 
Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features 
(0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:21.627 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not 
Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read 
Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:21.628 05:53:37 
spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.628 ************************************ 00:07:21.628 START TEST dd_bs_lt_native_bs 00:07:21.628 ************************************ 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.628 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.629 05:53:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:21.629 { 00:07:21.629 "subsystems": [ 00:07:21.629 { 00:07:21.629 "subsystem": "bdev", 00:07:21.629 "config": [ 00:07:21.629 { 00:07:21.629 "params": { 00:07:21.629 "trtype": "pcie", 00:07:21.629 "traddr": "0000:00:10.0", 00:07:21.629 "name": "Nvme0" 00:07:21.629 }, 00:07:21.629 "method": "bdev_nvme_attach_controller" 00:07:21.629 }, 00:07:21.629 { 00:07:21.629 "method": "bdev_wait_for_examine" 00:07:21.629 } 00:07:21.629 ] 00:07:21.629 } 00:07:21.629 ] 00:07:21.629 } 00:07:21.888 [2024-07-11 05:53:37.552867] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:21.888 [2024-07-11 05:53:37.553313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64041 ] 00:07:21.888 [2024-07-11 05:53:37.727057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.147 [2024-07-11 05:53:37.955006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.406 [2024-07-11 05:53:38.114100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.406 [2024-07-11 05:53:38.264256] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:22.406 [2024-07-11 05:53:38.264357] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.974 [2024-07-11 05:53:38.667890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.233 ************************************ 00:07:23.233 END TEST dd_bs_lt_native_bs 00:07:23.233 ************************************ 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.233 00:07:23.233 real 0m1.587s 00:07:23.233 user 0m1.339s 00:07:23.233 sys 0m0.197s 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.233 ************************************ 00:07:23.233 START TEST dd_rw 00:07:23.233 ************************************ 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:23.233 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.800 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:23.800 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:23.800 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.800 05:53:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.058 { 00:07:24.058 "subsystems": [ 00:07:24.058 { 00:07:24.058 "subsystem": "bdev", 00:07:24.058 "config": [ 00:07:24.058 { 00:07:24.058 "params": { 00:07:24.058 "trtype": "pcie", 00:07:24.058 "traddr": "0000:00:10.0", 00:07:24.058 "name": "Nvme0" 00:07:24.058 }, 00:07:24.058 "method": "bdev_nvme_attach_controller" 00:07:24.058 }, 00:07:24.058 { 00:07:24.058 "method": "bdev_wait_for_examine" 00:07:24.058 } 00:07:24.058 ] 00:07:24.058 } 00:07:24.058 ] 00:07:24.058 } 00:07:24.058 [2024-07-11 05:53:39.775820] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:24.058 [2024-07-11 05:53:39.776852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64084 ] 00:07:24.058 [2024-07-11 05:53:39.941247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.317 [2024-07-11 05:53:40.104112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.576 [2024-07-11 05:53:40.260713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.513  Copying: 60/60 [kB] (average 19 MBps) 00:07:25.513 00:07:25.513 05:53:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:25.513 05:53:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:25.513 05:53:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.513 05:53:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.513 { 00:07:25.513 "subsystems": [ 00:07:25.513 { 00:07:25.513 "subsystem": "bdev", 00:07:25.513 "config": [ 00:07:25.513 { 00:07:25.513 "params": { 00:07:25.513 "trtype": "pcie", 00:07:25.513 "traddr": "0000:00:10.0", 00:07:25.513 "name": "Nvme0" 00:07:25.513 }, 00:07:25.513 "method": "bdev_nvme_attach_controller" 00:07:25.513 }, 00:07:25.513 { 00:07:25.513 "method": "bdev_wait_for_examine" 00:07:25.513 } 00:07:25.513 ] 00:07:25.513 } 00:07:25.513 ] 00:07:25.513 } 00:07:25.772 [2024-07-11 05:53:41.448512] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:25.772 [2024-07-11 05:53:41.448644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64115 ] 00:07:25.772 [2024-07-11 05:53:41.602494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.031 [2024-07-11 05:53:41.763676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.031 [2024-07-11 05:53:41.920102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.225  Copying: 60/60 [kB] (average 14 MBps) 00:07:27.226 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.226 05:53:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.226 { 00:07:27.226 "subsystems": [ 00:07:27.226 { 00:07:27.226 "subsystem": "bdev", 00:07:27.226 "config": [ 00:07:27.226 { 00:07:27.226 "params": { 00:07:27.226 "trtype": "pcie", 00:07:27.226 "traddr": "0000:00:10.0", 00:07:27.226 "name": "Nvme0" 00:07:27.226 }, 00:07:27.226 "method": "bdev_nvme_attach_controller" 00:07:27.226 }, 00:07:27.226 { 00:07:27.226 "method": "bdev_wait_for_examine" 00:07:27.226 } 00:07:27.226 ] 00:07:27.226 } 00:07:27.226 ] 00:07:27.226 } 00:07:27.226 [2024-07-11 05:53:42.950618] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:27.226 [2024-07-11 05:53:42.950836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64137 ] 00:07:27.226 [2024-07-11 05:53:43.106081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.483 [2024-07-11 05:53:43.253540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.742 [2024-07-11 05:53:43.414112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.676  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.676 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.676 05:53:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.242 05:53:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:29.242 05:53:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:29.242 05:53:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.242 05:53:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.242 { 00:07:29.242 "subsystems": [ 00:07:29.242 { 00:07:29.242 "subsystem": "bdev", 00:07:29.242 "config": [ 00:07:29.242 { 00:07:29.242 "params": { 00:07:29.242 "trtype": "pcie", 00:07:29.242 "traddr": "0000:00:10.0", 00:07:29.242 "name": "Nvme0" 00:07:29.242 }, 00:07:29.242 "method": "bdev_nvme_attach_controller" 00:07:29.242 }, 00:07:29.242 { 00:07:29.242 "method": "bdev_wait_for_examine" 00:07:29.242 } 00:07:29.242 ] 00:07:29.242 } 00:07:29.242 ] 00:07:29.242 } 00:07:29.500 [2024-07-11 05:53:45.201777] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:29.500 [2024-07-11 05:53:45.201940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64174 ] 00:07:29.500 [2024-07-11 05:53:45.370602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.758 [2024-07-11 05:53:45.535617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.016 [2024-07-11 05:53:45.687886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.952  Copying: 60/60 [kB] (average 58 MBps) 00:07:30.952 00:07:30.952 05:53:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:30.952 05:53:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.952 05:53:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.952 05:53:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.952 { 00:07:30.952 "subsystems": [ 00:07:30.952 { 00:07:30.952 "subsystem": "bdev", 00:07:30.952 "config": [ 00:07:30.952 { 00:07:30.952 "params": { 00:07:30.952 "trtype": "pcie", 00:07:30.952 "traddr": "0000:00:10.0", 00:07:30.952 "name": "Nvme0" 00:07:30.952 }, 00:07:30.952 "method": "bdev_nvme_attach_controller" 00:07:30.952 }, 00:07:30.952 { 00:07:30.952 "method": "bdev_wait_for_examine" 00:07:30.952 } 00:07:30.952 ] 00:07:30.952 } 00:07:30.952 ] 00:07:30.952 } 00:07:30.952 [2024-07-11 05:53:46.752924] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:30.952 [2024-07-11 05:53:46.753117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64199 ] 00:07:31.211 [2024-07-11 05:53:46.918136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.211 [2024-07-11 05:53:47.077584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.469 [2024-07-11 05:53:47.223002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.405  Copying: 60/60 [kB] (average 58 MBps) 00:07:32.405 00:07:32.405 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.664 05:53:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.664 { 00:07:32.664 "subsystems": [ 00:07:32.664 { 00:07:32.664 "subsystem": "bdev", 00:07:32.664 "config": [ 00:07:32.664 { 00:07:32.664 "params": { 00:07:32.664 "trtype": "pcie", 00:07:32.664 "traddr": "0000:00:10.0", 00:07:32.664 "name": "Nvme0" 00:07:32.664 }, 00:07:32.664 "method": "bdev_nvme_attach_controller" 00:07:32.664 }, 00:07:32.664 { 00:07:32.664 "method": "bdev_wait_for_examine" 00:07:32.664 } 00:07:32.664 ] 00:07:32.664 } 00:07:32.664 ] 00:07:32.664 } 00:07:32.664 [2024-07-11 05:53:48.438131] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:32.664 [2024-07-11 05:53:48.438331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64227 ] 00:07:32.922 [2024-07-11 05:53:48.611241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.922 [2024-07-11 05:53:48.784852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.181 [2024-07-11 05:53:48.930106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.192  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.192 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:34.192 05:53:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.760 05:53:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:34.760 05:53:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:34.760 05:53:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.760 05:53:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.760 { 00:07:34.760 "subsystems": [ 00:07:34.760 { 00:07:34.760 "subsystem": "bdev", 00:07:34.760 "config": [ 00:07:34.760 { 00:07:34.760 "params": { 00:07:34.760 "trtype": "pcie", 00:07:34.760 "traddr": "0000:00:10.0", 00:07:34.760 "name": "Nvme0" 00:07:34.760 }, 00:07:34.760 "method": "bdev_nvme_attach_controller" 00:07:34.760 }, 00:07:34.760 { 00:07:34.760 "method": "bdev_wait_for_examine" 00:07:34.760 } 00:07:34.760 ] 00:07:34.760 } 00:07:34.760 ] 00:07:34.760 } 00:07:34.760 [2024-07-11 05:53:50.517761] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:34.760 [2024-07-11 05:53:50.517922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64258 ] 00:07:35.019 [2024-07-11 05:53:50.687623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.019 [2024-07-11 05:53:50.854152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.278 [2024-07-11 05:53:51.021590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.215  Copying: 56/56 [kB] (average 54 MBps) 00:07:36.215 00:07:36.215 05:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:36.215 05:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.215 05:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.215 05:53:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.474 { 00:07:36.474 "subsystems": [ 00:07:36.474 { 00:07:36.474 "subsystem": "bdev", 00:07:36.474 "config": [ 00:07:36.474 { 00:07:36.474 "params": { 00:07:36.474 "trtype": "pcie", 00:07:36.474 "traddr": "0000:00:10.0", 00:07:36.474 "name": "Nvme0" 00:07:36.474 }, 00:07:36.474 "method": "bdev_nvme_attach_controller" 00:07:36.474 }, 00:07:36.474 { 00:07:36.474 "method": "bdev_wait_for_examine" 00:07:36.474 } 00:07:36.474 ] 00:07:36.474 } 00:07:36.474 ] 00:07:36.474 } 00:07:36.474 [2024-07-11 05:53:52.208123] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:36.474 [2024-07-11 05:53:52.208247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64283 ] 00:07:36.474 [2024-07-11 05:53:52.353999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.732 [2024-07-11 05:53:52.508339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.732 [2024-07-11 05:53:52.652889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.930  Copying: 56/56 [kB] (average 27 MBps) 00:07:37.930 00:07:37.930 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.931 05:53:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.931 { 00:07:37.931 "subsystems": [ 00:07:37.931 { 00:07:37.931 "subsystem": "bdev", 00:07:37.931 "config": [ 00:07:37.931 { 00:07:37.931 "params": { 00:07:37.931 "trtype": "pcie", 00:07:37.931 "traddr": "0000:00:10.0", 00:07:37.931 "name": "Nvme0" 00:07:37.931 }, 00:07:37.931 "method": "bdev_nvme_attach_controller" 00:07:37.931 }, 00:07:37.931 { 00:07:37.931 "method": "bdev_wait_for_examine" 00:07:37.931 } 00:07:37.931 ] 00:07:37.931 } 00:07:37.931 ] 00:07:37.931 } 00:07:37.931 [2024-07-11 05:53:53.723963] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:37.931 [2024-07-11 05:53:53.724181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64311 ] 00:07:38.188 [2024-07-11 05:53:53.889011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.189 [2024-07-11 05:53:54.074049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.446 [2024-07-11 05:53:54.231608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.640  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.640 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:39.640 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.209 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:40.209 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.209 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.209 05:53:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.209 { 00:07:40.209 "subsystems": [ 00:07:40.209 { 00:07:40.209 "subsystem": "bdev", 00:07:40.209 "config": [ 00:07:40.209 { 00:07:40.209 "params": { 00:07:40.209 "trtype": "pcie", 00:07:40.209 "traddr": "0000:00:10.0", 00:07:40.209 "name": "Nvme0" 00:07:40.209 }, 00:07:40.209 "method": "bdev_nvme_attach_controller" 00:07:40.209 }, 00:07:40.209 { 00:07:40.209 "method": "bdev_wait_for_examine" 00:07:40.209 } 00:07:40.209 ] 00:07:40.209 } 00:07:40.209 ] 00:07:40.209 } 00:07:40.209 [2024-07-11 05:53:55.955927] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:40.209 [2024-07-11 05:53:55.956329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:07:40.209 [2024-07-11 05:53:56.110617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.468 [2024-07-11 05:53:56.277596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.727 [2024-07-11 05:53:56.428725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.664  Copying: 56/56 [kB] (average 54 MBps) 00:07:41.664 00:07:41.664 05:53:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:41.664 05:53:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.664 05:53:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.664 05:53:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.664 { 00:07:41.664 "subsystems": [ 00:07:41.664 { 00:07:41.664 "subsystem": "bdev", 00:07:41.664 "config": [ 00:07:41.664 { 00:07:41.664 "params": { 00:07:41.664 "trtype": "pcie", 00:07:41.664 "traddr": "0000:00:10.0", 00:07:41.664 "name": "Nvme0" 00:07:41.664 }, 00:07:41.664 "method": "bdev_nvme_attach_controller" 00:07:41.664 }, 00:07:41.664 { 00:07:41.664 "method": "bdev_wait_for_examine" 00:07:41.664 } 00:07:41.664 ] 00:07:41.664 } 00:07:41.664 ] 00:07:41.664 } 00:07:41.664 [2024-07-11 05:53:57.450493] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:41.664 [2024-07-11 05:53:57.450811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64373 ] 00:07:41.924 [2024-07-11 05:53:57.606408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.924 [2024-07-11 05:53:57.758420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.184 [2024-07-11 05:53:57.919452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.121  Copying: 56/56 [kB] (average 54 MBps) 00:07:43.121 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.121 05:53:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.381 { 00:07:43.381 "subsystems": [ 00:07:43.381 { 00:07:43.381 "subsystem": "bdev", 00:07:43.381 "config": [ 00:07:43.381 { 00:07:43.381 "params": { 00:07:43.381 "trtype": "pcie", 00:07:43.381 "traddr": "0000:00:10.0", 00:07:43.381 "name": "Nvme0" 00:07:43.381 }, 00:07:43.381 "method": "bdev_nvme_attach_controller" 00:07:43.381 }, 00:07:43.381 { 00:07:43.381 "method": "bdev_wait_for_examine" 00:07:43.381 } 00:07:43.381 ] 00:07:43.381 } 00:07:43.381 ] 00:07:43.381 } 00:07:43.381 [2024-07-11 05:53:59.138784] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:43.381 [2024-07-11 05:53:59.138962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64395 ] 00:07:43.641 [2024-07-11 05:53:59.309343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.641 [2024-07-11 05:53:59.471901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.900 [2024-07-11 05:53:59.617358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.837  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.837 00:07:44.837 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.838 05:54:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.404 05:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:45.404 05:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.404 05:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.404 05:54:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.404 { 00:07:45.404 "subsystems": [ 00:07:45.404 { 00:07:45.404 "subsystem": "bdev", 00:07:45.404 "config": [ 00:07:45.404 { 00:07:45.404 "params": { 00:07:45.404 "trtype": "pcie", 00:07:45.404 "traddr": "0000:00:10.0", 00:07:45.404 "name": "Nvme0" 00:07:45.404 }, 00:07:45.404 "method": "bdev_nvme_attach_controller" 00:07:45.404 }, 00:07:45.404 { 00:07:45.404 "method": "bdev_wait_for_examine" 00:07:45.404 } 00:07:45.404 ] 00:07:45.404 } 00:07:45.405 ] 00:07:45.405 } 00:07:45.405 [2024-07-11 05:54:01.135451] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:45.405 [2024-07-11 05:54:01.135812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64426 ] 00:07:45.405 [2024-07-11 05:54:01.293184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.663 [2024-07-11 05:54:01.473279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.921 [2024-07-11 05:54:01.625561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.858  Copying: 48/48 [kB] (average 46 MBps) 00:07:46.858 00:07:47.117 05:54:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:47.117 05:54:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:47.117 05:54:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.117 05:54:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.117 { 00:07:47.117 "subsystems": [ 00:07:47.117 { 00:07:47.117 "subsystem": "bdev", 00:07:47.117 "config": [ 00:07:47.117 { 00:07:47.117 "params": { 00:07:47.117 "trtype": "pcie", 00:07:47.117 "traddr": "0000:00:10.0", 00:07:47.117 "name": "Nvme0" 00:07:47.117 }, 00:07:47.117 "method": "bdev_nvme_attach_controller" 00:07:47.117 }, 00:07:47.117 { 00:07:47.117 "method": "bdev_wait_for_examine" 00:07:47.117 } 00:07:47.117 ] 00:07:47.117 } 00:07:47.117 ] 00:07:47.117 } 00:07:47.117 [2024-07-11 05:54:02.901656] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:47.117 [2024-07-11 05:54:02.901836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64457 ] 00:07:47.376 [2024-07-11 05:54:03.069086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.376 [2024-07-11 05:54:03.232605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.635 [2024-07-11 05:54:03.387387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.570  Copying: 48/48 [kB] (average 23 MBps) 00:07:48.570 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.570 05:54:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.570 { 00:07:48.570 "subsystems": [ 00:07:48.570 { 00:07:48.570 "subsystem": "bdev", 00:07:48.570 "config": [ 00:07:48.570 { 00:07:48.570 "params": { 00:07:48.570 "trtype": "pcie", 00:07:48.570 "traddr": "0000:00:10.0", 00:07:48.570 "name": "Nvme0" 00:07:48.570 }, 00:07:48.570 "method": "bdev_nvme_attach_controller" 00:07:48.570 }, 00:07:48.570 { 00:07:48.570 "method": "bdev_wait_for_examine" 00:07:48.570 } 00:07:48.570 ] 00:07:48.570 } 00:07:48.570 ] 00:07:48.570 } 00:07:48.570 [2024-07-11 05:54:04.449516] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:48.570 [2024-07-11 05:54:04.449685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:07:48.829 [2024-07-11 05:54:04.619406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.088 [2024-07-11 05:54:04.780326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.088 [2024-07-11 05:54:04.935945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.297  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.297 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:50.297 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.865 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:50.865 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:50.865 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.865 05:54:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.865 { 00:07:50.865 "subsystems": [ 00:07:50.865 { 00:07:50.865 "subsystem": "bdev", 00:07:50.865 "config": [ 00:07:50.865 { 00:07:50.865 "params": { 00:07:50.865 "trtype": "pcie", 00:07:50.865 "traddr": "0000:00:10.0", 00:07:50.865 "name": "Nvme0" 00:07:50.865 }, 00:07:50.865 "method": "bdev_nvme_attach_controller" 00:07:50.865 }, 00:07:50.865 { 00:07:50.865 "method": "bdev_wait_for_examine" 00:07:50.865 } 00:07:50.865 ] 00:07:50.865 } 00:07:50.865 ] 00:07:50.865 } 00:07:50.865 [2024-07-11 05:54:06.572400] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:50.865 [2024-07-11 05:54:06.572560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64516 ] 00:07:50.865 [2024-07-11 05:54:06.722947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.124 [2024-07-11 05:54:06.892217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.124 [2024-07-11 05:54:07.036541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.318  Copying: 48/48 [kB] (average 46 MBps) 00:07:52.318 00:07:52.318 05:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:52.318 05:54:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:52.318 05:54:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.318 05:54:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.318 { 00:07:52.318 "subsystems": [ 00:07:52.318 { 00:07:52.318 "subsystem": "bdev", 00:07:52.318 "config": [ 00:07:52.318 { 00:07:52.318 "params": { 00:07:52.318 "trtype": "pcie", 00:07:52.318 "traddr": "0000:00:10.0", 00:07:52.318 "name": "Nvme0" 00:07:52.318 }, 00:07:52.318 "method": "bdev_nvme_attach_controller" 00:07:52.318 }, 00:07:52.318 { 00:07:52.318 "method": "bdev_wait_for_examine" 00:07:52.318 } 00:07:52.318 ] 00:07:52.318 } 00:07:52.318 ] 00:07:52.318 } 00:07:52.318 [2024-07-11 05:54:08.078325] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:52.318 [2024-07-11 05:54:08.078456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64541 ] 00:07:52.318 [2024-07-11 05:54:08.230488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.577 [2024-07-11 05:54:08.378570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.835 [2024-07-11 05:54:08.537257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.770  Copying: 48/48 [kB] (average 46 MBps) 00:07:53.770 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.770 05:54:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.029 { 00:07:54.029 "subsystems": [ 00:07:54.029 { 00:07:54.029 "subsystem": "bdev", 00:07:54.029 "config": [ 00:07:54.029 { 00:07:54.029 "params": { 00:07:54.029 "trtype": "pcie", 00:07:54.029 "traddr": "0000:00:10.0", 00:07:54.029 "name": "Nvme0" 00:07:54.029 }, 00:07:54.029 "method": "bdev_nvme_attach_controller" 00:07:54.029 }, 00:07:54.029 { 00:07:54.029 "method": "bdev_wait_for_examine" 00:07:54.029 } 00:07:54.029 ] 00:07:54.029 } 00:07:54.029 ] 00:07:54.029 } 00:07:54.029 [2024-07-11 05:54:09.756638] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:54.029 [2024-07-11 05:54:09.756995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64569 ] 00:07:54.029 [2024-07-11 05:54:09.923622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.287 [2024-07-11 05:54:10.089539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.545 [2024-07-11 05:54:10.245485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.482  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:55.482 00:07:55.482 00:07:55.482 real 0m32.099s 00:07:55.482 user 0m27.403s 00:07:55.482 sys 0m13.015s 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.482 ************************************ 00:07:55.482 END TEST dd_rw 00:07:55.482 ************************************ 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.482 ************************************ 00:07:55.482 START TEST dd_rw_offset 00:07:55.482 ************************************ 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:55.482 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:55.483 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=gtung55escqiiog5i3w3te88ensbwow9scz4kod6x59fn81rsp5wgpebcpr6s3gbv3mvt0tb8xzw54cr4hi32sds9zsrxgbxzii43jv6uchang1997y6smuvryroecs3yp8fi0v5dwo0xmokpeok94vbjrsdm5pntv2pcrz7sbdobhdalpgf50bgfkaedg4f9ckw07w1uqcld3mjbjdrtnndn7chsawc9ktm8jllpg77h1gp53u51xu0xu9nkgyc8u1np24gfcr1l1jrbgqg68hodz7om4vwoi9apk6gl563fzo7xjb8xvoo6zn40fxtohj5ctlwldomq7ww7k8ep9gn4xmeh0tpzmlesb6ofyyfjbjp5dzjkf0xb55ino8vrz27grhj1pf31g1d63agz8vrv4ag9k98om253a6m7sjyovssj8d6u1uwotpeapcp58toj1if3trn97dlnxalwxj2jbqdajzuxcchi2k0d5d1yeq3rd7qm8p3xzgqsz2nd4mti42z9kzywvdal07t1x0ua7ylttzdr7vlho30h1eayvrg9m40mrqr7p4ecncub6s6w2zwinqcz8g8mkp6hxte39ta6uk8u068ipekqi2fxft9ripj2x1zr1almiwhz5ftum5aj99fccrwvsujwqe2c7m7wfq7namwe8cn3ob91z6tp7muw0kt3uxubzzcohkxnpenkgzz8zk2dd7uj34w0ublkg1b0phum6ss3mfflhphs2ezcoyc9bqg1reuvnsakudxw9dhjffmf2521dykg5xl1r6t9ppms1cjup64xs8ruvuh2ql6f6zwujo7kv0cu1sqsrujju4jr7pm9s9yf1696hjvidbont3jzr7rriv3wkg0i1pynkix2wtx1ow3pvja96fmbrjgudtno1xpxf5xv6r1zvgf6t9xasrydab1ou901j2xxk7p02rwtqz9h3aatssw0u975lz4osy059cmzw3u0dc80uatxdaui0elxq3x6a1iq6q9ico6gom2zg0mfvlmowd7agx4dn0iq32cg3nd67vwwjhmg81ntcq6j6vb0sgpa2u51vo8uhfrl0cbarvb7fev49faqwopewqtoezcxqjyvk6gjpl6e0d953ebuma8tma7pkvxtn63cm14wnbwq6hslsp8sk9g9w4bg2ghhwm5n45yelczejbsrfrn4gj5i0v0gypukfztzi67js2wn77te4uuiz28xzfh8152w0n94xgcb9a7hknkw93clb5taabdvhh2ilb1a7b2a7rky3an9ttkba6cn98kybhzzwrgg7gabuhmuzopxqnm9yfl0pqbv7ppye833sswrqf0z8vn6qrg8tzywj5ajyhvw109lkchp132tya02oliq8eu83kdf19hzp3pnrkdcxrcm9usn8rqmti7a14b2rby6m4aazs4g25kc94ndndso94ydmjdif7pkqja04qpuo3c39x0ouctl4meflxc58yeebslq5z5cxogdhnnzwzinh8cu0rrlkzs5tyaqx1u6tk4zg89a15zxkvtcvrml3r5tg984h54zwd9rg8tgisadnk93fdxdgbmmjn0s7h4lhotm8o27nnm1qunsz26kbz4zwl2jvxknaiay84vzy4dqwns1wfibp148mcvdv1eiv9wmjipt7g17id3xleig8vvpr00w4sfxuitsr1vmahp8keblzixzaum5idstofu8klawt294z21icjbjg7hobbqqhfy5ehklxaz9wzb8p317of8yfzdyyph7eev116idozwmlirf3jy7y5m7abpmhfpzug1dyxe6gl7yublsqqwluwhoiwsho6vgmpcbi98giosjofc686rcs3nenlsiei2kgq0w3lnhjn5lcfkjd2jrxpk4ozdk2sbjmvbhopg602d7r33un5slupn3h63e01lt1fjr48v2au840akbt9hhl61j7457cytgx289nokp62rkwj11zk4sa4txynueute3ekbn4ybjrianlv3uwxkw3fhe53jg1oc32gvhmokgqa0zml0cnwevkm3vzsr7fcbr2kzjwfosd30kxb6sj3cr6sw2c3tianrda8k14nu1h301kl4401znjxud1z29byompfw0spf35fzr0r61pklfwhyud7mdcnb7fiw5ck75s9hxkiut9elz3tw6easiprw2qrnfg7fbvbdl6693mvf37ras2ty13wxtynrrwp7egbnxkpvl8xb6iber4j04z1ih9he5ivniqhvfbaos95s8pln7ouzgmvbtmdsz8bgbyrrt4dwqu0hhmnamv93h83c1glwfjcd8ns3nvm6oq7vtdu05et5qtn3dbcw5a8wuvlghlbyiaxnjuk1k2rpymdztq4bzjyk86h9h2011wv58tu2g4ph7mbw6y21b52b324vkiv3lr74yo0up45mhjtlmgwkyibk49locuv8hukek18dobvd3myu4icg53393ajj1ucvd4qj8mnjipfy0inca3d7ntqpw2ei7bi3wdzij106n9634cgr0na6t4eqsevahb5phmrhsodl76tu4dtici56t4thpv9fl6odqhs5nh4be6m1etzm1oxgus40m7nvuy6nwop9sge5ywl9z3yabrbuz6ymhuri3288kfi92wghyard7jlyg73pgy8gwfgb0vns5zzuqeqckld8h0m9roj1e998g4bqeyuhy5cphy4gei3ftx158tz79rju6glkxs1ld7fghkbklcy07q9ogdjulswvxaa8rhlcwaqzk97mmne92gve3ftnceqs12npzoho94yibu0zprdwou8b5g3nyet6fx3r8seg57scgkv0gpu0059dcbr2zpo4x9n77xrp0jt348cptfj3j96lt3wbx529x4d189xkonyr5ymv2fjzvrjcfqc1fsytlp8lu780c20gw0ol87ziagpwube6pwqynmh1bu8ta2mylmzb54qoy8prn8ykv66gbhuuf6u0egoasqcp9bvtb7v92b4n5x3d568jljzy94t16rq65tr0bdyycdbu51vouwd01eu3hnxc6p3wlsv6x8q1fmy9czy37vv2y5c0hs1b4bhrh1ms75szm5oz4la8r5k8kyxebzgq6ibh0zhs47kiv4y7ziqi9d6i22thimjtbo293ks3qcmx811k2nqejdosy2ocb0ji7egdwf1fbsygt7l1savhzbo5egf4dhvyu2h153apce687099ifo26kln3n4egeu2cv0g6mcguldnoz0zoxt7mpzuu2m9537bnras8jez819y9b428c5gu88g61qomiiym651mg3gwcjxti3duxptsg6zfgcjo0ibvr4pftv7jp6vyakrkd3i8fpwpogz36g6vnig82anp3jrfo9exu2nzeu2sf0w1f3zfrftpdzu99um21hhb0lp7ffjrqsyw6ts1rmji33ipptkx1t249fh8w2wvb7x0hujtnf3uv533lepop7862qdyqacl4e2vq78pot5dnf1os
8dhup39ul3mdbzonwtzxh1z9tu2n9vhdqmk2tlux8g4ujx2wnb9eaip5e7e2xjj89gr0hiebbawbx3rd1w9qn63rj0ux5e98uq5j8sk0sgc2wbvf5jxfdb25c505wb32pj9pezhyu6lbcloueo2i1t673eeftvk32mwumrb7pno68h0g7rijtaozcc4bz04phfjqsl85i0hygs8fhybfg50bp5s804jczpfd3pv5ul3jowkzyukiguxbuo3akrjqu7pnbt982a0o12yfy0400plmetershho0753i25vxnkiizzwu0q851k5dlb19a2pm4fntgh0oodr2dknlhsgi6cok1k6l9duyjt1gp5xcpbwwcvtp87i8abaubd57p9uldq2vhp0m0z9iccfbkttincmo90v6gwazdijy8113y8jsyx00rgg24sus5xtzkzrpcd1faa17e8y5l033rrtl8zd6wcm804ym0au8fxfxx6utum3lvp4ny8dn0bz38wsnhb7foredivdi7tqspn0w60gp32s0zpkem 00:07:55.483 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:55.483 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:55.483 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:55.483 05:54:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:55.483 { 00:07:55.483 "subsystems": [ 00:07:55.483 { 00:07:55.483 "subsystem": "bdev", 00:07:55.483 "config": [ 00:07:55.483 { 00:07:55.483 "params": { 00:07:55.483 "trtype": "pcie", 00:07:55.483 "traddr": "0000:00:10.0", 00:07:55.483 "name": "Nvme0" 00:07:55.483 }, 00:07:55.483 "method": "bdev_nvme_attach_controller" 00:07:55.483 }, 00:07:55.483 { 00:07:55.483 "method": "bdev_wait_for_examine" 00:07:55.483 } 00:07:55.483 ] 00:07:55.483 } 00:07:55.483 ] 00:07:55.483 } 00:07:55.483 [2024-07-11 05:54:11.369574] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:55.483 [2024-07-11 05:54:11.369734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64611 ] 00:07:55.743 [2024-07-11 05:54:11.521143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.002 [2024-07-11 05:54:11.682952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.002 [2024-07-11 05:54:11.838508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.197  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:57.197 00:07:57.197 05:54:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:57.197 05:54:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:57.197 05:54:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:57.197 05:54:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:57.197 { 00:07:57.197 "subsystems": [ 00:07:57.197 { 00:07:57.197 "subsystem": "bdev", 00:07:57.197 "config": [ 00:07:57.197 { 00:07:57.197 "params": { 00:07:57.197 "trtype": "pcie", 00:07:57.197 "traddr": "0000:00:10.0", 00:07:57.197 "name": "Nvme0" 00:07:57.197 }, 00:07:57.197 "method": "bdev_nvme_attach_controller" 00:07:57.197 }, 00:07:57.197 { 00:07:57.197 "method": "bdev_wait_for_examine" 00:07:57.197 } 00:07:57.197 ] 00:07:57.197 } 00:07:57.197 ] 00:07:57.197 } 00:07:57.197 [2024-07-11 05:54:13.053041] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:07:57.197 [2024-07-11 05:54:13.053228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64637 ] 00:07:57.456 [2024-07-11 05:54:13.216996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.456 [2024-07-11 05:54:13.375490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.715 [2024-07-11 05:54:13.524485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.909  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:58.909 00:07:58.909 05:54:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ gtung55escqiiog5i3w3te88ensbwow9scz4kod6x59fn81rsp5wgpebcpr6s3gbv3mvt0tb8xzw54cr4hi32sds9zsrxgbxzii43jv6uchang1997y6smuvryroecs3yp8fi0v5dwo0xmokpeok94vbjrsdm5pntv2pcrz7sbdobhdalpgf50bgfkaedg4f9ckw07w1uqcld3mjbjdrtnndn7chsawc9ktm8jllpg77h1gp53u51xu0xu9nkgyc8u1np24gfcr1l1jrbgqg68hodz7om4vwoi9apk6gl563fzo7xjb8xvoo6zn40fxtohj5ctlwldomq7ww7k8ep9gn4xmeh0tpzmlesb6ofyyfjbjp5dzjkf0xb55ino8vrz27grhj1pf31g1d63agz8vrv4ag9k98om253a6m7sjyovssj8d6u1uwotpeapcp58toj1if3trn97dlnxalwxj2jbqdajzuxcchi2k0d5d1yeq3rd7qm8p3xzgqsz2nd4mti42z9kzywvdal07t1x0ua7ylttzdr7vlho30h1eayvrg9m40mrqr7p4ecncub6s6w2zwinqcz8g8mkp6hxte39ta6uk8u068ipekqi2fxft9ripj2x1zr1almiwhz5ftum5aj99fccrwvsujwqe2c7m7wfq7namwe8cn3ob91z6tp7muw0kt3uxubzzcohkxnpenkgzz8zk2dd7uj34w0ublkg1b0phum6ss3mfflhphs2ezcoyc9bqg1reuvnsakudxw9dhjffmf2521dykg5xl1r6t9ppms1cjup64xs8ruvuh2ql6f6zwujo7kv0cu1sqsrujju4jr7pm9s9yf1696hjvidbont3jzr7rriv3wkg0i1pynkix2wtx1ow3pvja96fmbrjgudtno1xpxf5xv6r1zvgf6t9xasrydab1ou901j2xxk7p02rwtqz9h3aatssw0u975lz4osy059cmzw3u0dc80uatxdaui0elxq3x6a1iq6q9ico6gom2zg0mfvlmowd7agx4dn0iq32cg3nd67vwwjhmg81ntcq6j6vb0sgpa2u51vo8uhfrl0cbarvb7fev49faqwopewqtoezcxqjyvk6gjpl6e0d953ebuma8tma7pkvxtn63cm14wnbwq6hslsp8sk9g9w4bg2ghhwm5n45yelczejbsrfrn4gj5i0v0gypukfztzi67js2wn77te4uuiz28xzfh8152w0n94xgcb9a7hknkw93clb5taabdvhh2ilb1a7b2a7rky3an9ttkba6cn98kybhzzwrgg7gabuhmuzopxqnm9yfl0pqbv7ppye833sswrqf0z8vn6qrg8tzywj5ajyhvw109lkchp132tya02oliq8eu83kdf19hzp3pnrkdcxrcm9usn8rqmti7a14b2rby6m4aazs4g25kc94ndndso94ydmjdif7pkqja04qpuo3c39x0ouctl4meflxc58yeebslq5z5cxogdhnnzwzinh8cu0rrlkzs5tyaqx1u6tk4zg89a15zxkvtcvrml3r5tg984h54zwd9rg8tgisadnk93fdxdgbmmjn0s7h4lhotm8o27nnm1qunsz26kbz4zwl2jvxknaiay84vzy4dqwns1wfibp148mcvdv1eiv9wmjipt7g17id3xleig8vvpr00w4sfxuitsr1vmahp8keblzixzaum5idstofu8klawt294z21icjbjg7hobbqqhfy5ehklxaz9wzb8p317of8yfzdyyph7eev116idozwmlirf3jy7y5m7abpmhfpzug1dyxe6gl7yublsqqwluwhoiwsho6vgmpcbi98giosjofc686rcs3nenlsiei2kgq0w3lnhjn5lcfkjd2jrxpk4ozdk2sbjmvbhopg602d7r33un5slupn3h63e01lt1fjr48v2au840akbt9hhl61j7457cytgx289nokp62rkwj11zk4sa4txynueute3ekbn4ybjrianlv3uwxkw3fhe53jg1oc32gvhmokgqa0zml0cnwevkm3vzsr7fcbr2kzjwfosd30kxb6sj3cr6sw2c3tianrda8k14nu1h301kl4401znjxud1z29byompfw0spf35fzr0r61pklfwhyud7mdcnb7fiw5ck75s9hxkiut9elz3tw6easiprw2qrnfg7fbvbdl6693mvf37ras2ty13wxtynrrwp7egbnxkpvl8xb6iber4j04z1ih9he5ivniqhvfbaos95s8pln7ouzgmvbtmdsz8bgbyrrt4dwqu0hhmnamv93h83c1glwfjcd8ns3nvm6oq7vtdu05et5qtn3dbcw5a8wuvlghlbyiaxnjuk1k2rpymdztq4bzjyk86h9h2011wv58tu2g4ph7mbw6y21b52b324vkiv3lr74yo0up45mhjtlmgwkyibk49locuv8hukek18dobvd3myu4icg53393ajj1ucvd4qj8mnjipfy0inca3d7ntqpw2ei7bi3wdzij106n9634cgr0na6t4eqsevahb5phmrhsodl76tu4dtici56t4thpv9fl6odqhs5nh4be6m1etzm1oxgus40m7nv
uy6nwop9sge5ywl9z3yabrbuz6ymhuri3288kfi92wghyard7jlyg73pgy8gwfgb0vns5zzuqeqckld8h0m9roj1e998g4bqeyuhy5cphy4gei3ftx158tz79rju6glkxs1ld7fghkbklcy07q9ogdjulswvxaa8rhlcwaqzk97mmne92gve3ftnceqs12npzoho94yibu0zprdwou8b5g3nyet6fx3r8seg57scgkv0gpu0059dcbr2zpo4x9n77xrp0jt348cptfj3j96lt3wbx529x4d189xkonyr5ymv2fjzvrjcfqc1fsytlp8lu780c20gw0ol87ziagpwube6pwqynmh1bu8ta2mylmzb54qoy8prn8ykv66gbhuuf6u0egoasqcp9bvtb7v92b4n5x3d568jljzy94t16rq65tr0bdyycdbu51vouwd01eu3hnxc6p3wlsv6x8q1fmy9czy37vv2y5c0hs1b4bhrh1ms75szm5oz4la8r5k8kyxebzgq6ibh0zhs47kiv4y7ziqi9d6i22thimjtbo293ks3qcmx811k2nqejdosy2ocb0ji7egdwf1fbsygt7l1savhzbo5egf4dhvyu2h153apce687099ifo26kln3n4egeu2cv0g6mcguldnoz0zoxt7mpzuu2m9537bnras8jez819y9b428c5gu88g61qomiiym651mg3gwcjxti3duxptsg6zfgcjo0ibvr4pftv7jp6vyakrkd3i8fpwpogz36g6vnig82anp3jrfo9exu2nzeu2sf0w1f3zfrftpdzu99um21hhb0lp7ffjrqsyw6ts1rmji33ipptkx1t249fh8w2wvb7x0hujtnf3uv533lepop7862qdyqacl4e2vq78pot5dnf1os8dhup39ul3mdbzonwtzxh1z9tu2n9vhdqmk2tlux8g4ujx2wnb9eaip5e7e2xjj89gr0hiebbawbx3rd1w9qn63rj0ux5e98uq5j8sk0sgc2wbvf5jxfdb25c505wb32pj9pezhyu6lbcloueo2i1t673eeftvk32mwumrb7pno68h0g7rijtaozcc4bz04phfjqsl85i0hygs8fhybfg50bp5s804jczpfd3pv5ul3jowkzyukiguxbuo3akrjqu7pnbt982a0o12yfy0400plmetershho0753i25vxnkiizzwu0q851k5dlb19a2pm4fntgh0oodr2dknlhsgi6cok1k6l9duyjt1gp5xcpbwwcvtp87i8abaubd57p9uldq2vhp0m0z9iccfbkttincmo90v6gwazdijy8113y8jsyx00rgg24sus5xtzkzrpcd1faa17e8y5l033rrtl8zd6wcm804ym0au8fxfxx6utum3lvp4ny8dn0bz38wsnhb7foredivdi7tqspn0w60gp32s0zpkem == \g\t\u\n\g\5\5\e\s\c\q\i\i\o\g\5\i\3\w\3\t\e\8\8\e\n\s\b\w\o\w\9\s\c\z\4\k\o\d\6\x\5\9\f\n\8\1\r\s\p\5\w\g\p\e\b\c\p\r\6\s\3\g\b\v\3\m\v\t\0\t\b\8\x\z\w\5\4\c\r\4\h\i\3\2\s\d\s\9\z\s\r\x\g\b\x\z\i\i\4\3\j\v\6\u\c\h\a\n\g\1\9\9\7\y\6\s\m\u\v\r\y\r\o\e\c\s\3\y\p\8\f\i\0\v\5\d\w\o\0\x\m\o\k\p\e\o\k\9\4\v\b\j\r\s\d\m\5\p\n\t\v\2\p\c\r\z\7\s\b\d\o\b\h\d\a\l\p\g\f\5\0\b\g\f\k\a\e\d\g\4\f\9\c\k\w\0\7\w\1\u\q\c\l\d\3\m\j\b\j\d\r\t\n\n\d\n\7\c\h\s\a\w\c\9\k\t\m\8\j\l\l\p\g\7\7\h\1\g\p\5\3\u\5\1\x\u\0\x\u\9\n\k\g\y\c\8\u\1\n\p\2\4\g\f\c\r\1\l\1\j\r\b\g\q\g\6\8\h\o\d\z\7\o\m\4\v\w\o\i\9\a\p\k\6\g\l\5\6\3\f\z\o\7\x\j\b\8\x\v\o\o\6\z\n\4\0\f\x\t\o\h\j\5\c\t\l\w\l\d\o\m\q\7\w\w\7\k\8\e\p\9\g\n\4\x\m\e\h\0\t\p\z\m\l\e\s\b\6\o\f\y\y\f\j\b\j\p\5\d\z\j\k\f\0\x\b\5\5\i\n\o\8\v\r\z\2\7\g\r\h\j\1\p\f\3\1\g\1\d\6\3\a\g\z\8\v\r\v\4\a\g\9\k\9\8\o\m\2\5\3\a\6\m\7\s\j\y\o\v\s\s\j\8\d\6\u\1\u\w\o\t\p\e\a\p\c\p\5\8\t\o\j\1\i\f\3\t\r\n\9\7\d\l\n\x\a\l\w\x\j\2\j\b\q\d\a\j\z\u\x\c\c\h\i\2\k\0\d\5\d\1\y\e\q\3\r\d\7\q\m\8\p\3\x\z\g\q\s\z\2\n\d\4\m\t\i\4\2\z\9\k\z\y\w\v\d\a\l\0\7\t\1\x\0\u\a\7\y\l\t\t\z\d\r\7\v\l\h\o\3\0\h\1\e\a\y\v\r\g\9\m\4\0\m\r\q\r\7\p\4\e\c\n\c\u\b\6\s\6\w\2\z\w\i\n\q\c\z\8\g\8\m\k\p\6\h\x\t\e\3\9\t\a\6\u\k\8\u\0\6\8\i\p\e\k\q\i\2\f\x\f\t\9\r\i\p\j\2\x\1\z\r\1\a\l\m\i\w\h\z\5\f\t\u\m\5\a\j\9\9\f\c\c\r\w\v\s\u\j\w\q\e\2\c\7\m\7\w\f\q\7\n\a\m\w\e\8\c\n\3\o\b\9\1\z\6\t\p\7\m\u\w\0\k\t\3\u\x\u\b\z\z\c\o\h\k\x\n\p\e\n\k\g\z\z\8\z\k\2\d\d\7\u\j\3\4\w\0\u\b\l\k\g\1\b\0\p\h\u\m\6\s\s\3\m\f\f\l\h\p\h\s\2\e\z\c\o\y\c\9\b\q\g\1\r\e\u\v\n\s\a\k\u\d\x\w\9\d\h\j\f\f\m\f\2\5\2\1\d\y\k\g\5\x\l\1\r\6\t\9\p\p\m\s\1\c\j\u\p\6\4\x\s\8\r\u\v\u\h\2\q\l\6\f\6\z\w\u\j\o\7\k\v\0\c\u\1\s\q\s\r\u\j\j\u\4\j\r\7\p\m\9\s\9\y\f\1\6\9\6\h\j\v\i\d\b\o\n\t\3\j\z\r\7\r\r\i\v\3\w\k\g\0\i\1\p\y\n\k\i\x\2\w\t\x\1\o\w\3\p\v\j\a\9\6\f\m\b\r\j\g\u\d\t\n\o\1\x\p\x\f\5\x\v\6\r\1\z\v\g\f\6\t\9\x\a\s\r\y\d\a\b\1\o\u\9\0\1\j\2\x\x\k\7\p\0\2\r\w\t\q\z\9\h\3\a\a\t\s\s\w\0\u\9\7\5\l\z\4\o\s\y\0\5\9\c\m\z\w\3\u\0\d\c\8\0\u\a\t\x\d\a\u\i\0\e\l\x\q\3\x\6\a\1\i\q\6\q\9\i\c\o\6\g\o\m\2\z\g\0\m\f\v\l\m\o\w\d\7\a\g\x\4\d\
n\0\i\q\3\2\c\g\3\n\d\6\7\v\w\w\j\h\m\g\8\1\n\t\c\q\6\j\6\v\b\0\s\g\p\a\2\u\5\1\v\o\8\u\h\f\r\l\0\c\b\a\r\v\b\7\f\e\v\4\9\f\a\q\w\o\p\e\w\q\t\o\e\z\c\x\q\j\y\v\k\6\g\j\p\l\6\e\0\d\9\5\3\e\b\u\m\a\8\t\m\a\7\p\k\v\x\t\n\6\3\c\m\1\4\w\n\b\w\q\6\h\s\l\s\p\8\s\k\9\g\9\w\4\b\g\2\g\h\h\w\m\5\n\4\5\y\e\l\c\z\e\j\b\s\r\f\r\n\4\g\j\5\i\0\v\0\g\y\p\u\k\f\z\t\z\i\6\7\j\s\2\w\n\7\7\t\e\4\u\u\i\z\2\8\x\z\f\h\8\1\5\2\w\0\n\9\4\x\g\c\b\9\a\7\h\k\n\k\w\9\3\c\l\b\5\t\a\a\b\d\v\h\h\2\i\l\b\1\a\7\b\2\a\7\r\k\y\3\a\n\9\t\t\k\b\a\6\c\n\9\8\k\y\b\h\z\z\w\r\g\g\7\g\a\b\u\h\m\u\z\o\p\x\q\n\m\9\y\f\l\0\p\q\b\v\7\p\p\y\e\8\3\3\s\s\w\r\q\f\0\z\8\v\n\6\q\r\g\8\t\z\y\w\j\5\a\j\y\h\v\w\1\0\9\l\k\c\h\p\1\3\2\t\y\a\0\2\o\l\i\q\8\e\u\8\3\k\d\f\1\9\h\z\p\3\p\n\r\k\d\c\x\r\c\m\9\u\s\n\8\r\q\m\t\i\7\a\1\4\b\2\r\b\y\6\m\4\a\a\z\s\4\g\2\5\k\c\9\4\n\d\n\d\s\o\9\4\y\d\m\j\d\i\f\7\p\k\q\j\a\0\4\q\p\u\o\3\c\3\9\x\0\o\u\c\t\l\4\m\e\f\l\x\c\5\8\y\e\e\b\s\l\q\5\z\5\c\x\o\g\d\h\n\n\z\w\z\i\n\h\8\c\u\0\r\r\l\k\z\s\5\t\y\a\q\x\1\u\6\t\k\4\z\g\8\9\a\1\5\z\x\k\v\t\c\v\r\m\l\3\r\5\t\g\9\8\4\h\5\4\z\w\d\9\r\g\8\t\g\i\s\a\d\n\k\9\3\f\d\x\d\g\b\m\m\j\n\0\s\7\h\4\l\h\o\t\m\8\o\2\7\n\n\m\1\q\u\n\s\z\2\6\k\b\z\4\z\w\l\2\j\v\x\k\n\a\i\a\y\8\4\v\z\y\4\d\q\w\n\s\1\w\f\i\b\p\1\4\8\m\c\v\d\v\1\e\i\v\9\w\m\j\i\p\t\7\g\1\7\i\d\3\x\l\e\i\g\8\v\v\p\r\0\0\w\4\s\f\x\u\i\t\s\r\1\v\m\a\h\p\8\k\e\b\l\z\i\x\z\a\u\m\5\i\d\s\t\o\f\u\8\k\l\a\w\t\2\9\4\z\2\1\i\c\j\b\j\g\7\h\o\b\b\q\q\h\f\y\5\e\h\k\l\x\a\z\9\w\z\b\8\p\3\1\7\o\f\8\y\f\z\d\y\y\p\h\7\e\e\v\1\1\6\i\d\o\z\w\m\l\i\r\f\3\j\y\7\y\5\m\7\a\b\p\m\h\f\p\z\u\g\1\d\y\x\e\6\g\l\7\y\u\b\l\s\q\q\w\l\u\w\h\o\i\w\s\h\o\6\v\g\m\p\c\b\i\9\8\g\i\o\s\j\o\f\c\6\8\6\r\c\s\3\n\e\n\l\s\i\e\i\2\k\g\q\0\w\3\l\n\h\j\n\5\l\c\f\k\j\d\2\j\r\x\p\k\4\o\z\d\k\2\s\b\j\m\v\b\h\o\p\g\6\0\2\d\7\r\3\3\u\n\5\s\l\u\p\n\3\h\6\3\e\0\1\l\t\1\f\j\r\4\8\v\2\a\u\8\4\0\a\k\b\t\9\h\h\l\6\1\j\7\4\5\7\c\y\t\g\x\2\8\9\n\o\k\p\6\2\r\k\w\j\1\1\z\k\4\s\a\4\t\x\y\n\u\e\u\t\e\3\e\k\b\n\4\y\b\j\r\i\a\n\l\v\3\u\w\x\k\w\3\f\h\e\5\3\j\g\1\o\c\3\2\g\v\h\m\o\k\g\q\a\0\z\m\l\0\c\n\w\e\v\k\m\3\v\z\s\r\7\f\c\b\r\2\k\z\j\w\f\o\s\d\3\0\k\x\b\6\s\j\3\c\r\6\s\w\2\c\3\t\i\a\n\r\d\a\8\k\1\4\n\u\1\h\3\0\1\k\l\4\4\0\1\z\n\j\x\u\d\1\z\2\9\b\y\o\m\p\f\w\0\s\p\f\3\5\f\z\r\0\r\6\1\p\k\l\f\w\h\y\u\d\7\m\d\c\n\b\7\f\i\w\5\c\k\7\5\s\9\h\x\k\i\u\t\9\e\l\z\3\t\w\6\e\a\s\i\p\r\w\2\q\r\n\f\g\7\f\b\v\b\d\l\6\6\9\3\m\v\f\3\7\r\a\s\2\t\y\1\3\w\x\t\y\n\r\r\w\p\7\e\g\b\n\x\k\p\v\l\8\x\b\6\i\b\e\r\4\j\0\4\z\1\i\h\9\h\e\5\i\v\n\i\q\h\v\f\b\a\o\s\9\5\s\8\p\l\n\7\o\u\z\g\m\v\b\t\m\d\s\z\8\b\g\b\y\r\r\t\4\d\w\q\u\0\h\h\m\n\a\m\v\9\3\h\8\3\c\1\g\l\w\f\j\c\d\8\n\s\3\n\v\m\6\o\q\7\v\t\d\u\0\5\e\t\5\q\t\n\3\d\b\c\w\5\a\8\w\u\v\l\g\h\l\b\y\i\a\x\n\j\u\k\1\k\2\r\p\y\m\d\z\t\q\4\b\z\j\y\k\8\6\h\9\h\2\0\1\1\w\v\5\8\t\u\2\g\4\p\h\7\m\b\w\6\y\2\1\b\5\2\b\3\2\4\v\k\i\v\3\l\r\7\4\y\o\0\u\p\4\5\m\h\j\t\l\m\g\w\k\y\i\b\k\4\9\l\o\c\u\v\8\h\u\k\e\k\1\8\d\o\b\v\d\3\m\y\u\4\i\c\g\5\3\3\9\3\a\j\j\1\u\c\v\d\4\q\j\8\m\n\j\i\p\f\y\0\i\n\c\a\3\d\7\n\t\q\p\w\2\e\i\7\b\i\3\w\d\z\i\j\1\0\6\n\9\6\3\4\c\g\r\0\n\a\6\t\4\e\q\s\e\v\a\h\b\5\p\h\m\r\h\s\o\d\l\7\6\t\u\4\d\t\i\c\i\5\6\t\4\t\h\p\v\9\f\l\6\o\d\q\h\s\5\n\h\4\b\e\6\m\1\e\t\z\m\1\o\x\g\u\s\4\0\m\7\n\v\u\y\6\n\w\o\p\9\s\g\e\5\y\w\l\9\z\3\y\a\b\r\b\u\z\6\y\m\h\u\r\i\3\2\8\8\k\f\i\9\2\w\g\h\y\a\r\d\7\j\l\y\g\7\3\p\g\y\8\g\w\f\g\b\0\v\n\s\5\z\z\u\q\e\q\c\k\l\d\8\h\0\m\9\r\o\j\1\e\9\9\8\g\4\b\q\e\y\u\h\y\5\c\p\h\y\4\g\e\i\3\f\t\x\1\5\8\t\z\7\9\r\j\u\6\g\l\k\x\s\1\l\d\7\f\g\h\k\b\k\l\c\y\0\7\q\9\o\g\d\j\u\l\s\w\v\x\a\a\8\r\h\l\c\w\a\q\z\k\9\7\m\m\n\e\9\2\g\v\e\3\f\t\n\c\e\q
\s\1\2\n\p\z\o\h\o\9\4\y\i\b\u\0\z\p\r\d\w\o\u\8\b\5\g\3\n\y\e\t\6\f\x\3\r\8\s\e\g\5\7\s\c\g\k\v\0\g\p\u\0\0\5\9\d\c\b\r\2\z\p\o\4\x\9\n\7\7\x\r\p\0\j\t\3\4\8\c\p\t\f\j\3\j\9\6\l\t\3\w\b\x\5\2\9\x\4\d\1\8\9\x\k\o\n\y\r\5\y\m\v\2\f\j\z\v\r\j\c\f\q\c\1\f\s\y\t\l\p\8\l\u\7\8\0\c\2\0\g\w\0\o\l\8\7\z\i\a\g\p\w\u\b\e\6\p\w\q\y\n\m\h\1\b\u\8\t\a\2\m\y\l\m\z\b\5\4\q\o\y\8\p\r\n\8\y\k\v\6\6\g\b\h\u\u\f\6\u\0\e\g\o\a\s\q\c\p\9\b\v\t\b\7\v\9\2\b\4\n\5\x\3\d\5\6\8\j\l\j\z\y\9\4\t\1\6\r\q\6\5\t\r\0\b\d\y\y\c\d\b\u\5\1\v\o\u\w\d\0\1\e\u\3\h\n\x\c\6\p\3\w\l\s\v\6\x\8\q\1\f\m\y\9\c\z\y\3\7\v\v\2\y\5\c\0\h\s\1\b\4\b\h\r\h\1\m\s\7\5\s\z\m\5\o\z\4\l\a\8\r\5\k\8\k\y\x\e\b\z\g\q\6\i\b\h\0\z\h\s\4\7\k\i\v\4\y\7\z\i\q\i\9\d\6\i\2\2\t\h\i\m\j\t\b\o\2\9\3\k\s\3\q\c\m\x\8\1\1\k\2\n\q\e\j\d\o\s\y\2\o\c\b\0\j\i\7\e\g\d\w\f\1\f\b\s\y\g\t\7\l\1\s\a\v\h\z\b\o\5\e\g\f\4\d\h\v\y\u\2\h\1\5\3\a\p\c\e\6\8\7\0\9\9\i\f\o\2\6\k\l\n\3\n\4\e\g\e\u\2\c\v\0\g\6\m\c\g\u\l\d\n\o\z\0\z\o\x\t\7\m\p\z\u\u\2\m\9\5\3\7\b\n\r\a\s\8\j\e\z\8\1\9\y\9\b\4\2\8\c\5\g\u\8\8\g\6\1\q\o\m\i\i\y\m\6\5\1\m\g\3\g\w\c\j\x\t\i\3\d\u\x\p\t\s\g\6\z\f\g\c\j\o\0\i\b\v\r\4\p\f\t\v\7\j\p\6\v\y\a\k\r\k\d\3\i\8\f\p\w\p\o\g\z\3\6\g\6\v\n\i\g\8\2\a\n\p\3\j\r\f\o\9\e\x\u\2\n\z\e\u\2\s\f\0\w\1\f\3\z\f\r\f\t\p\d\z\u\9\9\u\m\2\1\h\h\b\0\l\p\7\f\f\j\r\q\s\y\w\6\t\s\1\r\m\j\i\3\3\i\p\p\t\k\x\1\t\2\4\9\f\h\8\w\2\w\v\b\7\x\0\h\u\j\t\n\f\3\u\v\5\3\3\l\e\p\o\p\7\8\6\2\q\d\y\q\a\c\l\4\e\2\v\q\7\8\p\o\t\5\d\n\f\1\o\s\8\d\h\u\p\3\9\u\l\3\m\d\b\z\o\n\w\t\z\x\h\1\z\9\t\u\2\n\9\v\h\d\q\m\k\2\t\l\u\x\8\g\4\u\j\x\2\w\n\b\9\e\a\i\p\5\e\7\e\2\x\j\j\8\9\g\r\0\h\i\e\b\b\a\w\b\x\3\r\d\1\w\9\q\n\6\3\r\j\0\u\x\5\e\9\8\u\q\5\j\8\s\k\0\s\g\c\2\w\b\v\f\5\j\x\f\d\b\2\5\c\5\0\5\w\b\3\2\p\j\9\p\e\z\h\y\u\6\l\b\c\l\o\u\e\o\2\i\1\t\6\7\3\e\e\f\t\v\k\3\2\m\w\u\m\r\b\7\p\n\o\6\8\h\0\g\7\r\i\j\t\a\o\z\c\c\4\b\z\0\4\p\h\f\j\q\s\l\8\5\i\0\h\y\g\s\8\f\h\y\b\f\g\5\0\b\p\5\s\8\0\4\j\c\z\p\f\d\3\p\v\5\u\l\3\j\o\w\k\z\y\u\k\i\g\u\x\b\u\o\3\a\k\r\j\q\u\7\p\n\b\t\9\8\2\a\0\o\1\2\y\f\y\0\4\0\0\p\l\m\e\t\e\r\s\h\h\o\0\7\5\3\i\2\5\v\x\n\k\i\i\z\z\w\u\0\q\8\5\1\k\5\d\l\b\1\9\a\2\p\m\4\f\n\t\g\h\0\o\o\d\r\2\d\k\n\l\h\s\g\i\6\c\o\k\1\k\6\l\9\d\u\y\j\t\1\g\p\5\x\c\p\b\w\w\c\v\t\p\8\7\i\8\a\b\a\u\b\d\5\7\p\9\u\l\d\q\2\v\h\p\0\m\0\z\9\i\c\c\f\b\k\t\t\i\n\c\m\o\9\0\v\6\g\w\a\z\d\i\j\y\8\1\1\3\y\8\j\s\y\x\0\0\r\g\g\2\4\s\u\s\5\x\t\z\k\z\r\p\c\d\1\f\a\a\1\7\e\8\y\5\l\0\3\3\r\r\t\l\8\z\d\6\w\c\m\8\0\4\y\m\0\a\u\8\f\x\f\x\x\6\u\t\u\m\3\l\v\p\4\n\y\8\d\n\0\b\z\3\8\w\s\n\h\b\7\f\o\r\e\d\i\v\d\i\7\t\q\s\p\n\0\w\6\0\g\p\3\2\s\0\z\p\k\e\m ]] 00:07:58.910 ************************************ 00:07:58.910 END TEST dd_rw_offset 00:07:58.910 ************************************ 00:07:58.910 00:07:58.910 real 0m3.251s 00:07:58.910 user 0m2.753s 00:07:58.910 sys 0m1.412s 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:58.910 05:54:14 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.910 05:54:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 { 00:07:58.910 "subsystems": [ 00:07:58.910 { 00:07:58.910 "subsystem": "bdev", 00:07:58.910 "config": [ 00:07:58.910 { 00:07:58.910 "params": { 00:07:58.910 "trtype": "pcie", 00:07:58.910 "traddr": "0000:00:10.0", 00:07:58.910 "name": "Nvme0" 00:07:58.910 }, 00:07:58.910 "method": "bdev_nvme_attach_controller" 00:07:58.910 }, 00:07:58.910 { 00:07:58.910 "method": "bdev_wait_for_examine" 00:07:58.910 } 00:07:58.910 ] 00:07:58.910 } 00:07:58.910 ] 00:07:58.910 } 00:07:58.910 [2024-07-11 05:54:14.616579] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:07:58.910 [2024-07-11 05:54:14.616943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64678 ] 00:07:58.910 [2024-07-11 05:54:14.779056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.169 [2024-07-11 05:54:14.929142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.169 [2024-07-11 05:54:15.071599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.362  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:00.362 00:08:00.362 05:54:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.362 ************************************ 00:08:00.362 END TEST spdk_dd_basic_rw 00:08:00.362 ************************************ 00:08:00.362 00:08:00.362 real 0m39.119s 00:08:00.362 user 0m33.077s 00:08:00.362 sys 0m15.668s 00:08:00.362 05:54:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.362 05:54:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.362 05:54:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:00.362 05:54:16 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:00.362 05:54:16 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.362 05:54:16 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.362 05:54:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:00.362 ************************************ 00:08:00.362 START TEST spdk_dd_posix 00:08:00.362 ************************************ 00:08:00.362 05:54:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:00.621 * Looking for test storage... 
00:08:00.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:00.621 * First test run, liburing in use 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.621 ************************************ 00:08:00.621 START TEST dd_flag_append 00:08:00.621 ************************************ 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=iwnkq3f9ud2pjlih3wa95x4jz83v39ph 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=dmzmckj60yrspin2qyckvdnjb8ffbizc 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s iwnkq3f9ud2pjlih3wa95x4jz83v39ph 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s dmzmckj60yrspin2qyckvdnjb8ffbizc 00:08:00.621 05:54:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:00.621 [2024-07-11 05:54:16.446514] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:00.621 [2024-07-11 05:54:16.446706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64749 ] 00:08:00.879 [2024-07-11 05:54:16.619779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.879 [2024-07-11 05:54:16.775968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.136 [2024-07-11 05:54:16.926004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.510  Copying: 32/32 [B] (average 31 kBps) 00:08:02.510 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ dmzmckj60yrspin2qyckvdnjb8ffbizciwnkq3f9ud2pjlih3wa95x4jz83v39ph == \d\m\z\m\c\k\j\6\0\y\r\s\p\i\n\2\q\y\c\k\v\d\n\j\b\8\f\f\b\i\z\c\i\w\n\k\q\3\f\9\u\d\2\p\j\l\i\h\3\w\a\9\5\x\4\j\z\8\3\v\3\9\p\h ]] 00:08:02.510 00:08:02.510 real 0m1.698s 00:08:02.510 user 0m1.399s 00:08:02.510 sys 0m0.820s 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.510 ************************************ 00:08:02.510 END TEST dd_flag_append 00:08:02.510 ************************************ 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.510 ************************************ 00:08:02.510 START TEST dd_flag_directory 00:08:02.510 ************************************ 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.510 05:54:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.510 [2024-07-11 05:54:18.191114] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:02.510 [2024-07-11 05:54:18.191292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64795 ] 00:08:02.511 [2024-07-11 05:54:18.360039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.769 [2024-07-11 05:54:18.525086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.769 [2024-07-11 05:54:18.679850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.027 [2024-07-11 05:54:18.776670] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.027 [2024-07-11 05:54:18.776748] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.027 [2024-07-11 05:54:18.776789] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.594 [2024-07-11 05:54:19.346666] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.853 05:54:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:04.111 [2024-07-11 05:54:19.834347] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:04.111 [2024-07-11 05:54:19.834520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64816 ] 00:08:04.111 [2024-07-11 05:54:19.998831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.375 [2024-07-11 05:54:20.162671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.677 [2024-07-11 05:54:20.322006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.677 [2024-07-11 05:54:20.402374] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.677 [2024-07-11 05:54:20.402455] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.677 [2024-07-11 05:54:20.402498] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.249 [2024-07-11 05:54:21.019860] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:05.507 00:08:05.507 real 0m3.301s 00:08:05.507 user 0m2.713s 00:08:05.507 sys 0m0.369s 00:08:05.507 ************************************ 00:08:05.507 END TEST dd_flag_directory 00:08:05.507 ************************************ 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
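The dd_flag_directory section wrapping up here is a pure negative test: spdk_dd is pointed at a regular file while --iflag=directory (and then --oflag=directory) is set, and both runs must fail with "Not a directory"; the NOT/es bookkeeping in the trace only inverts the exit status so that the expected failure counts as a pass. Condensed, and reusing the DD/f0 shorthands from the append sketch above (the && exit 1 stands in for the harness's NOT helper):

"$DD" --if="$f0" --iflag=directory --of="$f0" && exit 1   # must fail: dd.dump0 is a regular file
"$DD" --if="$f0" --of="$f0" --oflag=directory && exit 1   # same expectation on the output side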
common/autotest_common.sh@10 -- # set +x 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.507 05:54:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:05.767 ************************************ 00:08:05.767 START TEST dd_flag_nofollow 00:08:05.767 ************************************ 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.767 05:54:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.767 
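dd_flag_nofollow, started above, first places symlinks next to the two dump files with ln -fs and then performs three copies: reading through dd.dump0.link with --iflag=nofollow must fail, writing through dd.dump1.link with --oflag=nofollow must fail (both with "Too many levels of symbolic links", as the following lines show), and a final run without the flag must follow the link and copy 512 bytes. With the same shorthands as before, and again flattening the NOT wrapper into && exit 1, the sequence is roughly:

ln -fs "$f0" "$f0.link"
ln -fs "$f1" "$f1.link"
"$DD" --if="$f0.link" --iflag=nofollow --of="$f1" && exit 1   # symlinked input must be rejected
"$DD" --if="$f0" --of="$f1.link" --oflag=nofollow && exit 1   # symlinked output must be rejected
"$DD" --if="$f0.link" --of="$f1"                              # no nofollow: link followed, 512 B copied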
[2024-07-11 05:54:21.551240] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:05.767 [2024-07-11 05:54:21.551432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64857 ] 00:08:06.027 [2024-07-11 05:54:21.721986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.027 [2024-07-11 05:54:21.896438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.286 [2024-07-11 05:54:22.041445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.286 [2024-07-11 05:54:22.113457] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.286 [2024-07-11 05:54:22.113515] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.286 [2024-07-11 05:54:22.113554] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.853 [2024-07-11 05:54:22.652617] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.423 05:54:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.423 [2024-07-11 05:54:23.114899] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:07.423 [2024-07-11 05:54:23.115045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64884 ] 00:08:07.423 [2024-07-11 05:54:23.261906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.682 [2024-07-11 05:54:23.411538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.682 [2024-07-11 05:54:23.571039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.941 [2024-07-11 05:54:23.653065] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.941 [2024-07-11 05:54:23.653142] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.941 [2024-07-11 05:54:23.653182] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.507 [2024-07-11 05:54:24.234431] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:08.767 05:54:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.026 [2024-07-11 05:54:24.694345] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:09.026 [2024-07-11 05:54:24.694509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64898 ] 00:08:09.026 [2024-07-11 05:54:24.851704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.285 [2024-07-11 05:54:25.010749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.285 [2024-07-11 05:54:25.159980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.480  Copying: 512/512 [B] (average 500 kBps) 00:08:10.480 00:08:10.480 ************************************ 00:08:10.480 END TEST dd_flag_nofollow 00:08:10.480 ************************************ 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ u3aty4cgi20ytr1fkzge63bkbmrnoeayvj3zk10djz0bf3cegolzjqy6nwbr5elmxe20xl2k3z69ofpane6vbdef9oti7d9fmmqphdut9ac7k0voetmbu76d4s2xmpmup1601028f605wq1sj1cd093bgyztmelwcjzulqcu1anuyf2x6rgmo91vk4lzlw3ki03hyjs5q1syyzpy4lgsez23dfkot2hju0f1002v3jm331hos5doum5lzyhykgi8co2vyogh3b5hytsb87d41dx07lods4k8wglg0i03kg57s2bkug9s1gd292nakqmw6cs8ak417zh2wxeqj7nzgh8e2lqwvn3ngowsm9zu9ey4zvbfwu9dnq8l2vavczc68ee1jjii88gjl9f81rucwvne369e8ydjb2hs5dfgr09rj6wk28d9f8lmz6y3yaxjxgtuwmvk7kxi3b1f96b57rfse3n9c5e8nopnuqwhtwvk9h3fwwhikbf7cgrhimjx == \u\3\a\t\y\4\c\g\i\2\0\y\t\r\1\f\k\z\g\e\6\3\b\k\b\m\r\n\o\e\a\y\v\j\3\z\k\1\0\d\j\z\0\b\f\3\c\e\g\o\l\z\j\q\y\6\n\w\b\r\5\e\l\m\x\e\2\0\x\l\2\k\3\z\6\9\o\f\p\a\n\e\6\v\b\d\e\f\9\o\t\i\7\d\9\f\m\m\q\p\h\d\u\t\9\a\c\7\k\0\v\o\e\t\m\b\u\7\6\d\4\s\2\x\m\p\m\u\p\1\6\0\1\0\2\8\f\6\0\5\w\q\1\s\j\1\c\d\0\9\3\b\g\y\z\t\m\e\l\w\c\j\z\u\l\q\c\u\1\a\n\u\y\f\2\x\6\r\g\m\o\9\1\v\k\4\l\z\l\w\3\k\i\0\3\h\y\j\s\5\q\1\s\y\y\z\p\y\4\l\g\s\e\z\2\3\d\f\k\o\t\2\h\j\u\0\f\1\0\0\2\v\3\j\m\3\3\1\h\o\s\5\d\o\u\m\5\l\z\y\h\y\k\g\i\8\c\o\2\v\y\o\g\h\3\b\5\h\y\t\s\b\8\7\d\4\1\d\x\0\7\l\o\d\s\4\k\8\w\g\l\g\0\i\0\3\k\g\5\7\s\2\b\k\u\g\9\s\1\g\d\2\9\2\n\a\k\q\m\w\6\c\s\8\a\k\4\1\7\z\h\2\w\x\e\q\j\7\n\z\g\h\8\e\2\l\q\w\v\n\3\n\g\o\w\s\m\9\z\u\9\e\y\4\z\v\b\f\w\u\9\d\n\q\8\l\2\v\a\v\c\z\c\6\8\e\e\1\j\j\i\i\8\8\g\j\l\9\f\8\1\r\u\c\w\v\n\e\3\6\9\e\8\y\d\j\b\2\h\s\5\d\f\g\r\0\9\r\j\6\w\k\2\8\d\9\f\8\l\m\z\6\y\3\y\a\x\j\x\g\t\u\w\m\v\k\7\k\x\i\3\b\1\f\9\6\b\5\7\r\f\s\e\3\n\9\c\5\e\8\n\o\p\n\u\q\w\h\t\w\v\k\9\h\3\f\w\w\h\i\k\b\f\7\c\g\r\h\i\m\j\x ]] 00:08:10.480 00:08:10.480 real 0m4.769s 00:08:10.480 user 0m3.935s 00:08:10.480 sys 0m1.096s 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:10.480 ************************************ 00:08:10.480 START TEST dd_flag_noatime 00:08:10.480 ************************************ 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:08:10.480 05:54:26 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720677265 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720677266 00:08:10.480 05:54:26 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:11.417 05:54:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.676 [2024-07-11 05:54:27.386367] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:11.676 [2024-07-11 05:54:27.386548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64958 ] 00:08:11.676 [2024-07-11 05:54:27.560484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.935 [2024-07-11 05:54:27.731620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.194 [2024-07-11 05:54:27.886368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.130  Copying: 512/512 [B] (average 500 kBps) 00:08:13.130 00:08:13.130 05:54:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.130 05:54:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720677265 )) 00:08:13.130 05:54:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.130 05:54:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720677266 )) 00:08:13.130 05:54:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.389 [2024-07-11 05:54:29.050736] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
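The dd_flag_noatime run above records the access times of both dump files with stat --printf=%X, sleeps for a second, and copies with --iflag=noatime; the (( ... )) checks then require that neither atime moved. The second copy, whose trace continues below, drops the flag and the final check expects dump0's atime to have advanced. In outline (same shorthands as earlier; this is a condensed restatement of the checks visible in the trace, not the harness's exact code):

atime_if=$(stat --printf=%X "$f0")            # access times before any copy
atime_of=$(stat --printf=%X "$f1")
sleep 1
"$DD" --if="$f0" --iflag=noatime --of="$f1"
(( $(stat --printf=%X "$f0") == atime_if ))   # the noatime read must not touch dump0's atime
(( $(stat --printf=%X "$f1") == atime_of ))
"$DD" --if="$f0" --of="$f1"
(( atime_if < $(stat --printf=%X "$f0") ))    # a plain read is expected to advance it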
00:08:13.389 [2024-07-11 05:54:29.050915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64978 ] 00:08:13.389 [2024-07-11 05:54:29.222533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.647 [2024-07-11 05:54:29.390863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.647 [2024-07-11 05:54:29.541278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.840  Copying: 512/512 [B] (average 500 kBps) 00:08:14.840 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.840 ************************************ 00:08:14.840 END TEST dd_flag_noatime 00:08:14.840 ************************************ 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720677269 )) 00:08:14.840 00:08:14.840 real 0m4.317s 00:08:14.840 user 0m2.694s 00:08:14.840 sys 0m1.588s 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:14.840 05:54:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:14.841 ************************************ 00:08:14.841 START TEST dd_flags_misc 00:08:14.841 ************************************ 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.841 05:54:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.841 [2024-07-11 05:54:30.739444] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
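dd_flags_misc, whose first direct/direct copy is just starting above, walks a small flag matrix: each read flag in {direct, nonblock} is paired with each write flag in {direct, nonblock, sync, dsync}, giving the eight 512-byte copies traced below, and after every copy the long [[ ... == ... ]] check requires the destination to match the source byte for byte. The loop amounts to the following (shorthands as before; the per-iteration payload redirection is assumed, and the xtrace on/off bookkeeping is omitted):

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    printf %s "$(gen_bytes 512)" > "$f0"      # assumed: a fresh 512-byte payload per read flag
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --if="$f0" --iflag="$flag_ro" --of="$f1" --oflag="$flag_rw"
        [[ "$(< "$f0")" == "$(< "$f1")" ]]    # source and destination must match exactly
    done
done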
00:08:14.841 [2024-07-11 05:54:30.739911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65024 ] 00:08:15.100 [2024-07-11 05:54:30.910707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.360 [2024-07-11 05:54:31.072475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.360 [2024-07-11 05:54:31.228445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.552  Copying: 512/512 [B] (average 500 kBps) 00:08:16.552 00:08:16.552 05:54:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ondjg2c34xfnkk1pvrw6355jur5z85mnc53644t9py2k3dvs5wglma85328p2mueo2ckzjika503033d9sn04dessq5mspt34v5u9hlr7u4g81wzhwve8w2gr3z8q62cxtq7239ddlyqnix418ra9ky7632a03qtb3fwufc1xtl1vai4y45p26t6dhi01wx02krc4az3wf3c8pdxtxnbv395wff4vur94kpnwjnhcgarkli54bzig9a0xlq8r9oe92mkb1dwg4cqvnoupccjrglp2fcelyfyvo9tpey4nln4b0lm86nt9yjrbf3z7i4bbejmza6370zcj3f1hgzfb4ub7awr5xrv6rezho3fmfr6re25mj9kj9wcgaz7xwn22k0tm3tsvdct069ntmlmcs0o7u0y27v3u2hcg0u9zasp1vjl8b3nu17af13n8i01foo8i8hcnqmbcedg3sgt9qwgrrcexufoxbti9lw50gkms6lfte40smyvvnes106w == \o\n\d\j\g\2\c\3\4\x\f\n\k\k\1\p\v\r\w\6\3\5\5\j\u\r\5\z\8\5\m\n\c\5\3\6\4\4\t\9\p\y\2\k\3\d\v\s\5\w\g\l\m\a\8\5\3\2\8\p\2\m\u\e\o\2\c\k\z\j\i\k\a\5\0\3\0\3\3\d\9\s\n\0\4\d\e\s\s\q\5\m\s\p\t\3\4\v\5\u\9\h\l\r\7\u\4\g\8\1\w\z\h\w\v\e\8\w\2\g\r\3\z\8\q\6\2\c\x\t\q\7\2\3\9\d\d\l\y\q\n\i\x\4\1\8\r\a\9\k\y\7\6\3\2\a\0\3\q\t\b\3\f\w\u\f\c\1\x\t\l\1\v\a\i\4\y\4\5\p\2\6\t\6\d\h\i\0\1\w\x\0\2\k\r\c\4\a\z\3\w\f\3\c\8\p\d\x\t\x\n\b\v\3\9\5\w\f\f\4\v\u\r\9\4\k\p\n\w\j\n\h\c\g\a\r\k\l\i\5\4\b\z\i\g\9\a\0\x\l\q\8\r\9\o\e\9\2\m\k\b\1\d\w\g\4\c\q\v\n\o\u\p\c\c\j\r\g\l\p\2\f\c\e\l\y\f\y\v\o\9\t\p\e\y\4\n\l\n\4\b\0\l\m\8\6\n\t\9\y\j\r\b\f\3\z\7\i\4\b\b\e\j\m\z\a\6\3\7\0\z\c\j\3\f\1\h\g\z\f\b\4\u\b\7\a\w\r\5\x\r\v\6\r\e\z\h\o\3\f\m\f\r\6\r\e\2\5\m\j\9\k\j\9\w\c\g\a\z\7\x\w\n\2\2\k\0\t\m\3\t\s\v\d\c\t\0\6\9\n\t\m\l\m\c\s\0\o\7\u\0\y\2\7\v\3\u\2\h\c\g\0\u\9\z\a\s\p\1\v\j\l\8\b\3\n\u\1\7\a\f\1\3\n\8\i\0\1\f\o\o\8\i\8\h\c\n\q\m\b\c\e\d\g\3\s\g\t\9\q\w\g\r\r\c\e\x\u\f\o\x\b\t\i\9\l\w\5\0\g\k\m\s\6\l\f\t\e\4\0\s\m\y\v\v\n\e\s\1\0\6\w ]] 00:08:16.552 05:54:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.552 05:54:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:16.552 [2024-07-11 05:54:32.398090] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:16.552 [2024-07-11 05:54:32.398275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65046 ] 00:08:16.811 [2024-07-11 05:54:32.567134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.811 [2024-07-11 05:54:32.728228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.069 [2024-07-11 05:54:32.878690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.035  Copying: 512/512 [B] (average 500 kBps) 00:08:18.035 00:08:18.035 05:54:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ondjg2c34xfnkk1pvrw6355jur5z85mnc53644t9py2k3dvs5wglma85328p2mueo2ckzjika503033d9sn04dessq5mspt34v5u9hlr7u4g81wzhwve8w2gr3z8q62cxtq7239ddlyqnix418ra9ky7632a03qtb3fwufc1xtl1vai4y45p26t6dhi01wx02krc4az3wf3c8pdxtxnbv395wff4vur94kpnwjnhcgarkli54bzig9a0xlq8r9oe92mkb1dwg4cqvnoupccjrglp2fcelyfyvo9tpey4nln4b0lm86nt9yjrbf3z7i4bbejmza6370zcj3f1hgzfb4ub7awr5xrv6rezho3fmfr6re25mj9kj9wcgaz7xwn22k0tm3tsvdct069ntmlmcs0o7u0y27v3u2hcg0u9zasp1vjl8b3nu17af13n8i01foo8i8hcnqmbcedg3sgt9qwgrrcexufoxbti9lw50gkms6lfte40smyvvnes106w == \o\n\d\j\g\2\c\3\4\x\f\n\k\k\1\p\v\r\w\6\3\5\5\j\u\r\5\z\8\5\m\n\c\5\3\6\4\4\t\9\p\y\2\k\3\d\v\s\5\w\g\l\m\a\8\5\3\2\8\p\2\m\u\e\o\2\c\k\z\j\i\k\a\5\0\3\0\3\3\d\9\s\n\0\4\d\e\s\s\q\5\m\s\p\t\3\4\v\5\u\9\h\l\r\7\u\4\g\8\1\w\z\h\w\v\e\8\w\2\g\r\3\z\8\q\6\2\c\x\t\q\7\2\3\9\d\d\l\y\q\n\i\x\4\1\8\r\a\9\k\y\7\6\3\2\a\0\3\q\t\b\3\f\w\u\f\c\1\x\t\l\1\v\a\i\4\y\4\5\p\2\6\t\6\d\h\i\0\1\w\x\0\2\k\r\c\4\a\z\3\w\f\3\c\8\p\d\x\t\x\n\b\v\3\9\5\w\f\f\4\v\u\r\9\4\k\p\n\w\j\n\h\c\g\a\r\k\l\i\5\4\b\z\i\g\9\a\0\x\l\q\8\r\9\o\e\9\2\m\k\b\1\d\w\g\4\c\q\v\n\o\u\p\c\c\j\r\g\l\p\2\f\c\e\l\y\f\y\v\o\9\t\p\e\y\4\n\l\n\4\b\0\l\m\8\6\n\t\9\y\j\r\b\f\3\z\7\i\4\b\b\e\j\m\z\a\6\3\7\0\z\c\j\3\f\1\h\g\z\f\b\4\u\b\7\a\w\r\5\x\r\v\6\r\e\z\h\o\3\f\m\f\r\6\r\e\2\5\m\j\9\k\j\9\w\c\g\a\z\7\x\w\n\2\2\k\0\t\m\3\t\s\v\d\c\t\0\6\9\n\t\m\l\m\c\s\0\o\7\u\0\y\2\7\v\3\u\2\h\c\g\0\u\9\z\a\s\p\1\v\j\l\8\b\3\n\u\1\7\a\f\1\3\n\8\i\0\1\f\o\o\8\i\8\h\c\n\q\m\b\c\e\d\g\3\s\g\t\9\q\w\g\r\r\c\e\x\u\f\o\x\b\t\i\9\l\w\5\0\g\k\m\s\6\l\f\t\e\4\0\s\m\y\v\v\n\e\s\1\0\6\w ]] 00:08:18.035 05:54:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.035 05:54:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:18.294 [2024-07-11 05:54:34.011956] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:18.294 [2024-07-11 05:54:34.012183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65067 ] 00:08:18.294 [2024-07-11 05:54:34.183886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.553 [2024-07-11 05:54:34.330069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.813 [2024-07-11 05:54:34.485413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.748  Copying: 512/512 [B] (average 166 kBps) 00:08:19.748 00:08:19.748 05:54:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ondjg2c34xfnkk1pvrw6355jur5z85mnc53644t9py2k3dvs5wglma85328p2mueo2ckzjika503033d9sn04dessq5mspt34v5u9hlr7u4g81wzhwve8w2gr3z8q62cxtq7239ddlyqnix418ra9ky7632a03qtb3fwufc1xtl1vai4y45p26t6dhi01wx02krc4az3wf3c8pdxtxnbv395wff4vur94kpnwjnhcgarkli54bzig9a0xlq8r9oe92mkb1dwg4cqvnoupccjrglp2fcelyfyvo9tpey4nln4b0lm86nt9yjrbf3z7i4bbejmza6370zcj3f1hgzfb4ub7awr5xrv6rezho3fmfr6re25mj9kj9wcgaz7xwn22k0tm3tsvdct069ntmlmcs0o7u0y27v3u2hcg0u9zasp1vjl8b3nu17af13n8i01foo8i8hcnqmbcedg3sgt9qwgrrcexufoxbti9lw50gkms6lfte40smyvvnes106w == \o\n\d\j\g\2\c\3\4\x\f\n\k\k\1\p\v\r\w\6\3\5\5\j\u\r\5\z\8\5\m\n\c\5\3\6\4\4\t\9\p\y\2\k\3\d\v\s\5\w\g\l\m\a\8\5\3\2\8\p\2\m\u\e\o\2\c\k\z\j\i\k\a\5\0\3\0\3\3\d\9\s\n\0\4\d\e\s\s\q\5\m\s\p\t\3\4\v\5\u\9\h\l\r\7\u\4\g\8\1\w\z\h\w\v\e\8\w\2\g\r\3\z\8\q\6\2\c\x\t\q\7\2\3\9\d\d\l\y\q\n\i\x\4\1\8\r\a\9\k\y\7\6\3\2\a\0\3\q\t\b\3\f\w\u\f\c\1\x\t\l\1\v\a\i\4\y\4\5\p\2\6\t\6\d\h\i\0\1\w\x\0\2\k\r\c\4\a\z\3\w\f\3\c\8\p\d\x\t\x\n\b\v\3\9\5\w\f\f\4\v\u\r\9\4\k\p\n\w\j\n\h\c\g\a\r\k\l\i\5\4\b\z\i\g\9\a\0\x\l\q\8\r\9\o\e\9\2\m\k\b\1\d\w\g\4\c\q\v\n\o\u\p\c\c\j\r\g\l\p\2\f\c\e\l\y\f\y\v\o\9\t\p\e\y\4\n\l\n\4\b\0\l\m\8\6\n\t\9\y\j\r\b\f\3\z\7\i\4\b\b\e\j\m\z\a\6\3\7\0\z\c\j\3\f\1\h\g\z\f\b\4\u\b\7\a\w\r\5\x\r\v\6\r\e\z\h\o\3\f\m\f\r\6\r\e\2\5\m\j\9\k\j\9\w\c\g\a\z\7\x\w\n\2\2\k\0\t\m\3\t\s\v\d\c\t\0\6\9\n\t\m\l\m\c\s\0\o\7\u\0\y\2\7\v\3\u\2\h\c\g\0\u\9\z\a\s\p\1\v\j\l\8\b\3\n\u\1\7\a\f\1\3\n\8\i\0\1\f\o\o\8\i\8\h\c\n\q\m\b\c\e\d\g\3\s\g\t\9\q\w\g\r\r\c\e\x\u\f\o\x\b\t\i\9\l\w\5\0\g\k\m\s\6\l\f\t\e\4\0\s\m\y\v\v\n\e\s\1\0\6\w ]] 00:08:19.748 05:54:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.749 05:54:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:20.007 [2024-07-11 05:54:35.691967] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:20.007 [2024-07-11 05:54:35.692195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65089 ] 00:08:20.007 [2024-07-11 05:54:35.861966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.266 [2024-07-11 05:54:36.028968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.525 [2024-07-11 05:54:36.198321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:21.467  Copying: 512/512 [B] (average 250 kBps) 00:08:21.467 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ondjg2c34xfnkk1pvrw6355jur5z85mnc53644t9py2k3dvs5wglma85328p2mueo2ckzjika503033d9sn04dessq5mspt34v5u9hlr7u4g81wzhwve8w2gr3z8q62cxtq7239ddlyqnix418ra9ky7632a03qtb3fwufc1xtl1vai4y45p26t6dhi01wx02krc4az3wf3c8pdxtxnbv395wff4vur94kpnwjnhcgarkli54bzig9a0xlq8r9oe92mkb1dwg4cqvnoupccjrglp2fcelyfyvo9tpey4nln4b0lm86nt9yjrbf3z7i4bbejmza6370zcj3f1hgzfb4ub7awr5xrv6rezho3fmfr6re25mj9kj9wcgaz7xwn22k0tm3tsvdct069ntmlmcs0o7u0y27v3u2hcg0u9zasp1vjl8b3nu17af13n8i01foo8i8hcnqmbcedg3sgt9qwgrrcexufoxbti9lw50gkms6lfte40smyvvnes106w == \o\n\d\j\g\2\c\3\4\x\f\n\k\k\1\p\v\r\w\6\3\5\5\j\u\r\5\z\8\5\m\n\c\5\3\6\4\4\t\9\p\y\2\k\3\d\v\s\5\w\g\l\m\a\8\5\3\2\8\p\2\m\u\e\o\2\c\k\z\j\i\k\a\5\0\3\0\3\3\d\9\s\n\0\4\d\e\s\s\q\5\m\s\p\t\3\4\v\5\u\9\h\l\r\7\u\4\g\8\1\w\z\h\w\v\e\8\w\2\g\r\3\z\8\q\6\2\c\x\t\q\7\2\3\9\d\d\l\y\q\n\i\x\4\1\8\r\a\9\k\y\7\6\3\2\a\0\3\q\t\b\3\f\w\u\f\c\1\x\t\l\1\v\a\i\4\y\4\5\p\2\6\t\6\d\h\i\0\1\w\x\0\2\k\r\c\4\a\z\3\w\f\3\c\8\p\d\x\t\x\n\b\v\3\9\5\w\f\f\4\v\u\r\9\4\k\p\n\w\j\n\h\c\g\a\r\k\l\i\5\4\b\z\i\g\9\a\0\x\l\q\8\r\9\o\e\9\2\m\k\b\1\d\w\g\4\c\q\v\n\o\u\p\c\c\j\r\g\l\p\2\f\c\e\l\y\f\y\v\o\9\t\p\e\y\4\n\l\n\4\b\0\l\m\8\6\n\t\9\y\j\r\b\f\3\z\7\i\4\b\b\e\j\m\z\a\6\3\7\0\z\c\j\3\f\1\h\g\z\f\b\4\u\b\7\a\w\r\5\x\r\v\6\r\e\z\h\o\3\f\m\f\r\6\r\e\2\5\m\j\9\k\j\9\w\c\g\a\z\7\x\w\n\2\2\k\0\t\m\3\t\s\v\d\c\t\0\6\9\n\t\m\l\m\c\s\0\o\7\u\0\y\2\7\v\3\u\2\h\c\g\0\u\9\z\a\s\p\1\v\j\l\8\b\3\n\u\1\7\a\f\1\3\n\8\i\0\1\f\o\o\8\i\8\h\c\n\q\m\b\c\e\d\g\3\s\g\t\9\q\w\g\r\r\c\e\x\u\f\o\x\b\t\i\9\l\w\5\0\g\k\m\s\6\l\f\t\e\4\0\s\m\y\v\v\n\e\s\1\0\6\w ]] 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.467 05:54:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:21.467 [2024-07-11 05:54:37.361781] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:21.467 [2024-07-11 05:54:37.361961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65116 ] 00:08:21.725 [2024-07-11 05:54:37.530829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.983 [2024-07-11 05:54:37.688149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.983 [2024-07-11 05:54:37.837401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.179  Copying: 512/512 [B] (average 500 kBps) 00:08:23.179 00:08:23.179 05:54:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5fu5op5848vbc0clhazs4juycw8mx8lsoiyf1l472l42u70m8p09ovkil0kymylbqwnum0yg6keapfgsab6yazvg6y1v4yd1fze2iht1t26uhxahldcsy4qr015iftx0cujub0ceybepvh4y2x7uy8bo34upex4suz473iv5t2hc24oohxifgtyqfqr2h5z9nagyizc0adj7xiilo1brjg2p4a5mmab9nj7nl5gpk8r9ti6coicsw1ipoddnvwle84sou3sywi5c4w2k3p4t4ci19n13rp7p0adqu73qzmb9caiu6ugor32vddqsrqnlw2hghfj67drdgcz73afpx6ru75jc83kszs7cxfv5jcslew6njeu4quopwqb9alpthufq69b3r00dcuyhyezo5uz2oh7aidpoml1iseothc34ffbuab9le9m2mpvi7ro4m3bc7dbmxt8r04xvs01owvygvvbj3trnb2ci43ra11sq2ctvx6mix43ome8r7ji5 == \5\f\u\5\o\p\5\8\4\8\v\b\c\0\c\l\h\a\z\s\4\j\u\y\c\w\8\m\x\8\l\s\o\i\y\f\1\l\4\7\2\l\4\2\u\7\0\m\8\p\0\9\o\v\k\i\l\0\k\y\m\y\l\b\q\w\n\u\m\0\y\g\6\k\e\a\p\f\g\s\a\b\6\y\a\z\v\g\6\y\1\v\4\y\d\1\f\z\e\2\i\h\t\1\t\2\6\u\h\x\a\h\l\d\c\s\y\4\q\r\0\1\5\i\f\t\x\0\c\u\j\u\b\0\c\e\y\b\e\p\v\h\4\y\2\x\7\u\y\8\b\o\3\4\u\p\e\x\4\s\u\z\4\7\3\i\v\5\t\2\h\c\2\4\o\o\h\x\i\f\g\t\y\q\f\q\r\2\h\5\z\9\n\a\g\y\i\z\c\0\a\d\j\7\x\i\i\l\o\1\b\r\j\g\2\p\4\a\5\m\m\a\b\9\n\j\7\n\l\5\g\p\k\8\r\9\t\i\6\c\o\i\c\s\w\1\i\p\o\d\d\n\v\w\l\e\8\4\s\o\u\3\s\y\w\i\5\c\4\w\2\k\3\p\4\t\4\c\i\1\9\n\1\3\r\p\7\p\0\a\d\q\u\7\3\q\z\m\b\9\c\a\i\u\6\u\g\o\r\3\2\v\d\d\q\s\r\q\n\l\w\2\h\g\h\f\j\6\7\d\r\d\g\c\z\7\3\a\f\p\x\6\r\u\7\5\j\c\8\3\k\s\z\s\7\c\x\f\v\5\j\c\s\l\e\w\6\n\j\e\u\4\q\u\o\p\w\q\b\9\a\l\p\t\h\u\f\q\6\9\b\3\r\0\0\d\c\u\y\h\y\e\z\o\5\u\z\2\o\h\7\a\i\d\p\o\m\l\1\i\s\e\o\t\h\c\3\4\f\f\b\u\a\b\9\l\e\9\m\2\m\p\v\i\7\r\o\4\m\3\b\c\7\d\b\m\x\t\8\r\0\4\x\v\s\0\1\o\w\v\y\g\v\v\b\j\3\t\r\n\b\2\c\i\4\3\r\a\1\1\s\q\2\c\t\v\x\6\m\i\x\4\3\o\m\e\8\r\7\j\i\5 ]] 00:08:23.179 05:54:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.179 05:54:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:23.179 [2024-07-11 05:54:38.962481] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:23.179 [2024-07-11 05:54:38.962732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65132 ] 00:08:23.439 [2024-07-11 05:54:39.134205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.439 [2024-07-11 05:54:39.297669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.698 [2024-07-11 05:54:39.447034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.635  Copying: 512/512 [B] (average 500 kBps) 00:08:24.635 00:08:24.635 05:54:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5fu5op5848vbc0clhazs4juycw8mx8lsoiyf1l472l42u70m8p09ovkil0kymylbqwnum0yg6keapfgsab6yazvg6y1v4yd1fze2iht1t26uhxahldcsy4qr015iftx0cujub0ceybepvh4y2x7uy8bo34upex4suz473iv5t2hc24oohxifgtyqfqr2h5z9nagyizc0adj7xiilo1brjg2p4a5mmab9nj7nl5gpk8r9ti6coicsw1ipoddnvwle84sou3sywi5c4w2k3p4t4ci19n13rp7p0adqu73qzmb9caiu6ugor32vddqsrqnlw2hghfj67drdgcz73afpx6ru75jc83kszs7cxfv5jcslew6njeu4quopwqb9alpthufq69b3r00dcuyhyezo5uz2oh7aidpoml1iseothc34ffbuab9le9m2mpvi7ro4m3bc7dbmxt8r04xvs01owvygvvbj3trnb2ci43ra11sq2ctvx6mix43ome8r7ji5 == \5\f\u\5\o\p\5\8\4\8\v\b\c\0\c\l\h\a\z\s\4\j\u\y\c\w\8\m\x\8\l\s\o\i\y\f\1\l\4\7\2\l\4\2\u\7\0\m\8\p\0\9\o\v\k\i\l\0\k\y\m\y\l\b\q\w\n\u\m\0\y\g\6\k\e\a\p\f\g\s\a\b\6\y\a\z\v\g\6\y\1\v\4\y\d\1\f\z\e\2\i\h\t\1\t\2\6\u\h\x\a\h\l\d\c\s\y\4\q\r\0\1\5\i\f\t\x\0\c\u\j\u\b\0\c\e\y\b\e\p\v\h\4\y\2\x\7\u\y\8\b\o\3\4\u\p\e\x\4\s\u\z\4\7\3\i\v\5\t\2\h\c\2\4\o\o\h\x\i\f\g\t\y\q\f\q\r\2\h\5\z\9\n\a\g\y\i\z\c\0\a\d\j\7\x\i\i\l\o\1\b\r\j\g\2\p\4\a\5\m\m\a\b\9\n\j\7\n\l\5\g\p\k\8\r\9\t\i\6\c\o\i\c\s\w\1\i\p\o\d\d\n\v\w\l\e\8\4\s\o\u\3\s\y\w\i\5\c\4\w\2\k\3\p\4\t\4\c\i\1\9\n\1\3\r\p\7\p\0\a\d\q\u\7\3\q\z\m\b\9\c\a\i\u\6\u\g\o\r\3\2\v\d\d\q\s\r\q\n\l\w\2\h\g\h\f\j\6\7\d\r\d\g\c\z\7\3\a\f\p\x\6\r\u\7\5\j\c\8\3\k\s\z\s\7\c\x\f\v\5\j\c\s\l\e\w\6\n\j\e\u\4\q\u\o\p\w\q\b\9\a\l\p\t\h\u\f\q\6\9\b\3\r\0\0\d\c\u\y\h\y\e\z\o\5\u\z\2\o\h\7\a\i\d\p\o\m\l\1\i\s\e\o\t\h\c\3\4\f\f\b\u\a\b\9\l\e\9\m\2\m\p\v\i\7\r\o\4\m\3\b\c\7\d\b\m\x\t\8\r\0\4\x\v\s\0\1\o\w\v\y\g\v\v\b\j\3\t\r\n\b\2\c\i\4\3\r\a\1\1\s\q\2\c\t\v\x\6\m\i\x\4\3\o\m\e\8\r\7\j\i\5 ]] 00:08:24.635 05:54:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.635 05:54:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:24.895 [2024-07-11 05:54:40.605705] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:24.895 [2024-07-11 05:54:40.605868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65159 ] 00:08:24.895 [2024-07-11 05:54:40.772778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.154 [2024-07-11 05:54:40.934379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.413 [2024-07-11 05:54:41.089588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.349  Copying: 512/512 [B] (average 250 kBps) 00:08:26.349 00:08:26.349 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5fu5op5848vbc0clhazs4juycw8mx8lsoiyf1l472l42u70m8p09ovkil0kymylbqwnum0yg6keapfgsab6yazvg6y1v4yd1fze2iht1t26uhxahldcsy4qr015iftx0cujub0ceybepvh4y2x7uy8bo34upex4suz473iv5t2hc24oohxifgtyqfqr2h5z9nagyizc0adj7xiilo1brjg2p4a5mmab9nj7nl5gpk8r9ti6coicsw1ipoddnvwle84sou3sywi5c4w2k3p4t4ci19n13rp7p0adqu73qzmb9caiu6ugor32vddqsrqnlw2hghfj67drdgcz73afpx6ru75jc83kszs7cxfv5jcslew6njeu4quopwqb9alpthufq69b3r00dcuyhyezo5uz2oh7aidpoml1iseothc34ffbuab9le9m2mpvi7ro4m3bc7dbmxt8r04xvs01owvygvvbj3trnb2ci43ra11sq2ctvx6mix43ome8r7ji5 == \5\f\u\5\o\p\5\8\4\8\v\b\c\0\c\l\h\a\z\s\4\j\u\y\c\w\8\m\x\8\l\s\o\i\y\f\1\l\4\7\2\l\4\2\u\7\0\m\8\p\0\9\o\v\k\i\l\0\k\y\m\y\l\b\q\w\n\u\m\0\y\g\6\k\e\a\p\f\g\s\a\b\6\y\a\z\v\g\6\y\1\v\4\y\d\1\f\z\e\2\i\h\t\1\t\2\6\u\h\x\a\h\l\d\c\s\y\4\q\r\0\1\5\i\f\t\x\0\c\u\j\u\b\0\c\e\y\b\e\p\v\h\4\y\2\x\7\u\y\8\b\o\3\4\u\p\e\x\4\s\u\z\4\7\3\i\v\5\t\2\h\c\2\4\o\o\h\x\i\f\g\t\y\q\f\q\r\2\h\5\z\9\n\a\g\y\i\z\c\0\a\d\j\7\x\i\i\l\o\1\b\r\j\g\2\p\4\a\5\m\m\a\b\9\n\j\7\n\l\5\g\p\k\8\r\9\t\i\6\c\o\i\c\s\w\1\i\p\o\d\d\n\v\w\l\e\8\4\s\o\u\3\s\y\w\i\5\c\4\w\2\k\3\p\4\t\4\c\i\1\9\n\1\3\r\p\7\p\0\a\d\q\u\7\3\q\z\m\b\9\c\a\i\u\6\u\g\o\r\3\2\v\d\d\q\s\r\q\n\l\w\2\h\g\h\f\j\6\7\d\r\d\g\c\z\7\3\a\f\p\x\6\r\u\7\5\j\c\8\3\k\s\z\s\7\c\x\f\v\5\j\c\s\l\e\w\6\n\j\e\u\4\q\u\o\p\w\q\b\9\a\l\p\t\h\u\f\q\6\9\b\3\r\0\0\d\c\u\y\h\y\e\z\o\5\u\z\2\o\h\7\a\i\d\p\o\m\l\1\i\s\e\o\t\h\c\3\4\f\f\b\u\a\b\9\l\e\9\m\2\m\p\v\i\7\r\o\4\m\3\b\c\7\d\b\m\x\t\8\r\0\4\x\v\s\0\1\o\w\v\y\g\v\v\b\j\3\t\r\n\b\2\c\i\4\3\r\a\1\1\s\q\2\c\t\v\x\6\m\i\x\4\3\o\m\e\8\r\7\j\i\5 ]] 00:08:26.349 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.349 05:54:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.608 [2024-07-11 05:54:42.311889] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:26.608 [2024-07-11 05:54:42.312332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65179 ] 00:08:26.608 [2024-07-11 05:54:42.467328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.867 [2024-07-11 05:54:42.634526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.867 [2024-07-11 05:54:42.782219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.063  Copying: 512/512 [B] (average 250 kBps) 00:08:28.063 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5fu5op5848vbc0clhazs4juycw8mx8lsoiyf1l472l42u70m8p09ovkil0kymylbqwnum0yg6keapfgsab6yazvg6y1v4yd1fze2iht1t26uhxahldcsy4qr015iftx0cujub0ceybepvh4y2x7uy8bo34upex4suz473iv5t2hc24oohxifgtyqfqr2h5z9nagyizc0adj7xiilo1brjg2p4a5mmab9nj7nl5gpk8r9ti6coicsw1ipoddnvwle84sou3sywi5c4w2k3p4t4ci19n13rp7p0adqu73qzmb9caiu6ugor32vddqsrqnlw2hghfj67drdgcz73afpx6ru75jc83kszs7cxfv5jcslew6njeu4quopwqb9alpthufq69b3r00dcuyhyezo5uz2oh7aidpoml1iseothc34ffbuab9le9m2mpvi7ro4m3bc7dbmxt8r04xvs01owvygvvbj3trnb2ci43ra11sq2ctvx6mix43ome8r7ji5 == \5\f\u\5\o\p\5\8\4\8\v\b\c\0\c\l\h\a\z\s\4\j\u\y\c\w\8\m\x\8\l\s\o\i\y\f\1\l\4\7\2\l\4\2\u\7\0\m\8\p\0\9\o\v\k\i\l\0\k\y\m\y\l\b\q\w\n\u\m\0\y\g\6\k\e\a\p\f\g\s\a\b\6\y\a\z\v\g\6\y\1\v\4\y\d\1\f\z\e\2\i\h\t\1\t\2\6\u\h\x\a\h\l\d\c\s\y\4\q\r\0\1\5\i\f\t\x\0\c\u\j\u\b\0\c\e\y\b\e\p\v\h\4\y\2\x\7\u\y\8\b\o\3\4\u\p\e\x\4\s\u\z\4\7\3\i\v\5\t\2\h\c\2\4\o\o\h\x\i\f\g\t\y\q\f\q\r\2\h\5\z\9\n\a\g\y\i\z\c\0\a\d\j\7\x\i\i\l\o\1\b\r\j\g\2\p\4\a\5\m\m\a\b\9\n\j\7\n\l\5\g\p\k\8\r\9\t\i\6\c\o\i\c\s\w\1\i\p\o\d\d\n\v\w\l\e\8\4\s\o\u\3\s\y\w\i\5\c\4\w\2\k\3\p\4\t\4\c\i\1\9\n\1\3\r\p\7\p\0\a\d\q\u\7\3\q\z\m\b\9\c\a\i\u\6\u\g\o\r\3\2\v\d\d\q\s\r\q\n\l\w\2\h\g\h\f\j\6\7\d\r\d\g\c\z\7\3\a\f\p\x\6\r\u\7\5\j\c\8\3\k\s\z\s\7\c\x\f\v\5\j\c\s\l\e\w\6\n\j\e\u\4\q\u\o\p\w\q\b\9\a\l\p\t\h\u\f\q\6\9\b\3\r\0\0\d\c\u\y\h\y\e\z\o\5\u\z\2\o\h\7\a\i\d\p\o\m\l\1\i\s\e\o\t\h\c\3\4\f\f\b\u\a\b\9\l\e\9\m\2\m\p\v\i\7\r\o\4\m\3\b\c\7\d\b\m\x\t\8\r\0\4\x\v\s\0\1\o\w\v\y\g\v\v\b\j\3\t\r\n\b\2\c\i\4\3\r\a\1\1\s\q\2\c\t\v\x\6\m\i\x\4\3\o\m\e\8\r\7\j\i\5 ]] 00:08:28.063 00:08:28.063 real 0m13.232s 00:08:28.063 user 0m10.825s 00:08:28.063 sys 0m6.440s 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.063 ************************************ 00:08:28.063 END TEST dd_flags_misc 00:08:28.063 ************************************ 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:28.063 * Second test run, disabling liburing, forcing AIO 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 ************************************ 00:08:28.063 START TEST dd_flag_append_forced_aio 00:08:28.063 ************************************ 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=7vg704ahsr5m4lh99rtj6hbphyvgopcs 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=2egxr5o46wcg4txljqkr5o8lb7yil3ux 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 7vg704ahsr5m4lh99rtj6hbphyvgopcs 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 2egxr5o46wcg4txljqkr5o8lb7yil3ux 00:08:28.063 05:54:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:28.321 [2024-07-11 05:54:44.023373] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
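From "Second test run, disabling liburing, forcing AIO" onwards the same posix checks are repeated with the uring path switched off: the harness appends --aio to the spdk_dd command line (DD_APP+=("--aio") in the trace above), so every copy goes through the AIO fallback instead of liburing. The append check itself is unchanged; with the earlier shorthands and this pass's two random tokens it reduces to:

"$DD" --aio --if="$f0" --of="$f1" --oflag=append
[[ "$(< "$f1")" == "${dump1}${dump0}" ]]      # dump1 must again end with dump0's token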
00:08:28.321 [2024-07-11 05:54:44.023550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65221 ] 00:08:28.321 [2024-07-11 05:54:44.183038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.579 [2024-07-11 05:54:44.352841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.837 [2024-07-11 05:54:44.520836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.772  Copying: 32/32 [B] (average 31 kBps) 00:08:29.772 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 2egxr5o46wcg4txljqkr5o8lb7yil3ux7vg704ahsr5m4lh99rtj6hbphyvgopcs == \2\e\g\x\r\5\o\4\6\w\c\g\4\t\x\l\j\q\k\r\5\o\8\l\b\7\y\i\l\3\u\x\7\v\g\7\0\4\a\h\s\r\5\m\4\l\h\9\9\r\t\j\6\h\b\p\h\y\v\g\o\p\c\s ]] 00:08:29.772 00:08:29.772 real 0m1.686s 00:08:29.772 user 0m1.388s 00:08:29.772 sys 0m0.174s 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.772 ************************************ 00:08:29.772 END TEST dd_flag_append_forced_aio 00:08:29.772 ************************************ 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:29.772 ************************************ 00:08:29.772 START TEST dd_flag_directory_forced_aio 00:08:29.772 ************************************ 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
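dd_flag_directory_forced_aio, traced in the surrounding lines, repeats the directory negative test through the AIO path: with --aio added, both the --iflag=directory and --oflag=directory runs against the regular dd.dump0 must still fail with "Not a directory", and the NOT/es logic again folds the expected failures back into a zero exit. Condensed, with the usual shorthands:

"$DD" --aio --if="$f0" --iflag=directory --of="$f0" && exit 1   # must still fail via the AIO path
"$DD" --aio --if="$f0" --of="$f0" --oflag=directory && exit 1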
"$arg")" in 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.772 05:54:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:30.030 [2024-07-11 05:54:45.745918] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:30.030 [2024-07-11 05:54:45.746036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65265 ] 00:08:30.030 [2024-07-11 05:54:45.902221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.289 [2024-07-11 05:54:46.061948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.558 [2024-07-11 05:54:46.221409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.558 [2024-07-11 05:54:46.305312] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.558 [2024-07-11 05:54:46.305372] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.558 [2024-07-11 05:54:46.305411] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.158 [2024-07-11 05:54:46.936800] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.416 05:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:31.675 [2024-07-11 05:54:47.434713] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:31.675 [2024-07-11 05:54:47.434892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65281 ] 00:08:31.934 [2024-07-11 05:54:47.604152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.934 [2024-07-11 05:54:47.766556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.193 [2024-07-11 05:54:47.927046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.193 [2024-07-11 05:54:48.003044] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:32.193 [2024-07-11 05:54:48.003102] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:32.193 [2024-07-11 05:54:48.003141] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.761 [2024-07-11 05:54:48.608690] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.328 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:08:33.328 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.328 05:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:33.328 
05:54:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.328 00:08:33.328 real 0m3.342s 00:08:33.328 user 0m2.751s 00:08:33.328 sys 0m0.368s 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:33.328 ************************************ 00:08:33.328 END TEST dd_flag_directory_forced_aio 00:08:33.328 ************************************ 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:33.328 ************************************ 00:08:33.328 START TEST dd_flag_nofollow_forced_aio 00:08:33.328 ************************************ 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.328 05:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.328 [2024-07-11 05:54:49.171699] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:33.329 [2024-07-11 05:54:49.171869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65327 ] 00:08:33.587 [2024-07-11 05:54:49.343177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.845 [2024-07-11 05:54:49.520688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.845 [2024-07-11 05:54:49.674621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.845 [2024-07-11 05:54:49.753412] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:33.845 [2024-07-11 05:54:49.753471] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:33.845 [2024-07-11 05:54:49.753509] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.781 [2024-07-11 05:54:50.475137] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.040 05:54:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:35.299 [2024-07-11 05:54:51.046762] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:35.299 [2024-07-11 05:54:51.046931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65354 ] 00:08:35.299 [2024-07-11 05:54:51.215874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.558 [2024-07-11 05:54:51.375982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.816 [2024-07-11 05:54:51.547109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.816 [2024-07-11 05:54:51.625248] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:35.816 [2024-07-11 05:54:51.625307] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:35.816 [2024-07-11 05:54:51.625346] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.383 [2024-07-11 05:54:52.226794] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.951 05:54:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.951 [2024-07-11 05:54:52.705194] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:36.951 [2024-07-11 05:54:52.705337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65374 ] 00:08:36.951 [2024-07-11 05:54:52.861754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.210 [2024-07-11 05:54:53.029534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.469 [2024-07-11 05:54:53.188762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:38.404  Copying: 512/512 [B] (average 500 kBps) 00:08:38.404 00:08:38.404 ************************************ 00:08:38.404 END TEST dd_flag_nofollow_forced_aio 00:08:38.404 ************************************ 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ cratp98yrdcxfjmguk8zf0jsct5a3ecqxotflolya2cm5ld72cxm1pisr1yfgnzaiwktzilsgjjjupidexrnw032i35tu76unh2uof0blvgpyrf1cfaq71f5fy1rucacn3g2qwlgk0j5kaxsg4uqe4a9yn9mxqigefalskp2fgpf1obeep3hrm3a0j1jmlk4s6c5417lsyma323gan6x8rbjef2g8xjemddvrw1936uurf95snod5zd3amm08fitedd4dj5kz2ovv08djzgylvvtj8unon5nt16snvdfdp0b8cn45w3fxdhkkf46n8y60qld1088sh69d3alw4mft21u7m9roj6v4nsbni03m4s3bg8tf1nr01m1s8wajvdt2z7s13dgs0a5tymg6g3swi4onyufl7yze1ty15pb6u329seiml15u1lfycm06xtd67viema4hc4pikdn5awn534trmejqxirmanliuyjtzr2687aqdlmvva4xdpm89vh == \c\r\a\t\p\9\8\y\r\d\c\x\f\j\m\g\u\k\8\z\f\0\j\s\c\t\5\a\3\e\c\q\x\o\t\f\l\o\l\y\a\2\c\m\5\l\d\7\2\c\x\m\1\p\i\s\r\1\y\f\g\n\z\a\i\w\k\t\z\i\l\s\g\j\j\j\u\p\i\d\e\x\r\n\w\0\3\2\i\3\5\t\u\7\6\u\n\h\2\u\o\f\0\b\l\v\g\p\y\r\f\1\c\f\a\q\7\1\f\5\f\y\1\r\u\c\a\c\n\3\g\2\q\w\l\g\k\0\j\5\k\a\x\s\g\4\u\q\e\4\a\9\y\n\9\m\x\q\i\g\e\f\a\l\s\k\p\2\f\g\p\f\1\o\b\e\e\p\3\h\r\m\3\a\0\j\1\j\m\l\k\4\s\6\c\5\4\1\7\l\s\y\m\a\3\2\3\g\a\n\6\x\8\r\b\j\e\f\2\g\8\x\j\e\m\d\d\v\r\w\1\9\3\6\u\u\r\f\9\5\s\n\o\d\5\z\d\3\a\m\m\0\8\f\i\t\e\d\d\4\d\j\5\k\z\2\o\v\v\0\8\d\j\z\g\y\l\v\v\t\j\8\u\n\o\n\5\n\t\1\6\s\n\v\d\f\d\p\0\b\8\c\n\4\5\w\3\f\x\d\h\k\k\f\4\6\n\8\y\6\0\q\l\d\1\0\8\8\s\h\6\9\d\3\a\l\w\4\m\f\t\2\1\u\7\m\9\r\o\j\6\v\4\n\s\b\n\i\0\3\m\4\s\3\b\g\8\t\f\1\n\r\0\1\m\1\s\8\w\a\j\v\d\t\2\z\7\s\1\3\d\g\s\0\a\5\t\y\m\g\6\g\3\s\w\i\4\o\n\y\u\f\l\7\y\z\e\1\t\y\1\5\p\b\6\u\3\2\9\s\e\i\m\l\1\5\u\1\l\f\y\c\m\0\6\x\t\d\6\7\v\i\e\m\a\4\h\c\4\p\i\k\d\n\5\a\w\n\5\3\4\t\r\m\e\j\q\x\i\r\m\a\n\l\i\u\y\j\t\z\r\2\6\8\7\a\q\d\l\m\v\v\a\4\x\d\p\m\8\9\v\h ]] 00:08:38.405 00:08:38.405 real 0m5.212s 00:08:38.405 user 0m4.319s 00:08:38.405 sys 0m0.545s 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.405 05:54:54 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:38.405 ************************************ 00:08:38.405 START TEST dd_flag_noatime_forced_aio 00:08:38.405 ************************************ 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:38.405 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.663 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720677293 00:08:38.663 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.663 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720677294 00:08:38.663 05:54:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:39.600 05:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.600 [2024-07-11 05:54:55.449293] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:39.600 [2024-07-11 05:54:55.449752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65426 ] 00:08:39.859 [2024-07-11 05:54:55.624872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.118 [2024-07-11 05:54:55.835475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.118 [2024-07-11 05:54:55.989678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.315  Copying: 512/512 [B] (average 500 kBps) 00:08:41.315 00:08:41.315 05:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.315 05:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720677293 )) 00:08:41.315 05:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.315 05:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720677294 )) 00:08:41.315 05:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.574 [2024-07-11 05:54:57.304677] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:41.574 [2024-07-11 05:54:57.304846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65451 ] 00:08:41.574 [2024-07-11 05:54:57.477295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.833 [2024-07-11 05:54:57.718507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.091 [2024-07-11 05:54:57.918163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.287  Copying: 512/512 [B] (average 500 kBps) 00:08:43.287 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720677298 )) 00:08:43.287 00:08:43.287 real 0m4.838s 00:08:43.287 user 0m3.161s 00:08:43.287 sys 0m0.420s 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.287 ************************************ 00:08:43.287 END TEST dd_flag_noatime_forced_aio 00:08:43.287 ************************************ 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.287 05:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.287 05:54:59 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 ************************************ 00:08:43.560 START TEST dd_flags_misc_forced_aio 00:08:43.560 ************************************ 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:43.560 05:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:43.560 [2024-07-11 05:54:59.328902] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:43.560 [2024-07-11 05:54:59.329077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65495 ] 00:08:43.836 [2024-07-11 05:54:59.502427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.094 [2024-07-11 05:54:59.777371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.094 [2024-07-11 05:54:59.975533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:45.732  Copying: 512/512 [B] (average 500 kBps) 00:08:45.732 00:08:45.732 05:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7sh861xc47y6z7ra38ywb65h3aho6krr4c4i6jm65lowljgpog5ckwtujxf26xlqgaf8di76z2w1gs1f0cuosbzba6jcx02ft4xb8he1bwmbb04f9j3qb2eetqslom5oai2tse1u60t1g63us67dnr3ow8vo24edhnitf8o36bhb5vtx3zqd4qaka84wui7yo3ce13rr5yc8kpl01eeyapwtmao70a0szg5vrrq4668j5irrgj9w5iueh7upq47lbjqngu902e61thqisbjqf1npnf9mon1qp1sg58p2zdx5fa8t32abdn85ayw91qztgh9r6v7gffw8z5ixpy2ntchku3wd4994x13tkxcjsn2kajpkby4auqkqtdjerf3rfocic73j27owt66zf3ak8alxszn1ijj95jah2tsc2akit6b95wa2slnq8cxmm19tq6pcvql1o9hmujl42xch58jon6bbdgqkodphsapqhqco1wl57b65g0t58fjoostj == 
\7\s\h\8\6\1\x\c\4\7\y\6\z\7\r\a\3\8\y\w\b\6\5\h\3\a\h\o\6\k\r\r\4\c\4\i\6\j\m\6\5\l\o\w\l\j\g\p\o\g\5\c\k\w\t\u\j\x\f\2\6\x\l\q\g\a\f\8\d\i\7\6\z\2\w\1\g\s\1\f\0\c\u\o\s\b\z\b\a\6\j\c\x\0\2\f\t\4\x\b\8\h\e\1\b\w\m\b\b\0\4\f\9\j\3\q\b\2\e\e\t\q\s\l\o\m\5\o\a\i\2\t\s\e\1\u\6\0\t\1\g\6\3\u\s\6\7\d\n\r\3\o\w\8\v\o\2\4\e\d\h\n\i\t\f\8\o\3\6\b\h\b\5\v\t\x\3\z\q\d\4\q\a\k\a\8\4\w\u\i\7\y\o\3\c\e\1\3\r\r\5\y\c\8\k\p\l\0\1\e\e\y\a\p\w\t\m\a\o\7\0\a\0\s\z\g\5\v\r\r\q\4\6\6\8\j\5\i\r\r\g\j\9\w\5\i\u\e\h\7\u\p\q\4\7\l\b\j\q\n\g\u\9\0\2\e\6\1\t\h\q\i\s\b\j\q\f\1\n\p\n\f\9\m\o\n\1\q\p\1\s\g\5\8\p\2\z\d\x\5\f\a\8\t\3\2\a\b\d\n\8\5\a\y\w\9\1\q\z\t\g\h\9\r\6\v\7\g\f\f\w\8\z\5\i\x\p\y\2\n\t\c\h\k\u\3\w\d\4\9\9\4\x\1\3\t\k\x\c\j\s\n\2\k\a\j\p\k\b\y\4\a\u\q\k\q\t\d\j\e\r\f\3\r\f\o\c\i\c\7\3\j\2\7\o\w\t\6\6\z\f\3\a\k\8\a\l\x\s\z\n\1\i\j\j\9\5\j\a\h\2\t\s\c\2\a\k\i\t\6\b\9\5\w\a\2\s\l\n\q\8\c\x\m\m\1\9\t\q\6\p\c\v\q\l\1\o\9\h\m\u\j\l\4\2\x\c\h\5\8\j\o\n\6\b\b\d\g\q\k\o\d\p\h\s\a\p\q\h\q\c\o\1\w\l\5\7\b\6\5\g\0\t\5\8\f\j\o\o\s\t\j ]] 00:08:45.732 05:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.732 05:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:45.732 [2024-07-11 05:55:01.366690] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:45.732 [2024-07-11 05:55:01.366860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65520 ] 00:08:45.732 [2024-07-11 05:55:01.535585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.991 [2024-07-11 05:55:01.706288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.991 [2024-07-11 05:55:01.866268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.184  Copying: 512/512 [B] (average 500 kBps) 00:08:47.184 00:08:47.184 05:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7sh861xc47y6z7ra38ywb65h3aho6krr4c4i6jm65lowljgpog5ckwtujxf26xlqgaf8di76z2w1gs1f0cuosbzba6jcx02ft4xb8he1bwmbb04f9j3qb2eetqslom5oai2tse1u60t1g63us67dnr3ow8vo24edhnitf8o36bhb5vtx3zqd4qaka84wui7yo3ce13rr5yc8kpl01eeyapwtmao70a0szg5vrrq4668j5irrgj9w5iueh7upq47lbjqngu902e61thqisbjqf1npnf9mon1qp1sg58p2zdx5fa8t32abdn85ayw91qztgh9r6v7gffw8z5ixpy2ntchku3wd4994x13tkxcjsn2kajpkby4auqkqtdjerf3rfocic73j27owt66zf3ak8alxszn1ijj95jah2tsc2akit6b95wa2slnq8cxmm19tq6pcvql1o9hmujl42xch58jon6bbdgqkodphsapqhqco1wl57b65g0t58fjoostj == 
\7\s\h\8\6\1\x\c\4\7\y\6\z\7\r\a\3\8\y\w\b\6\5\h\3\a\h\o\6\k\r\r\4\c\4\i\6\j\m\6\5\l\o\w\l\j\g\p\o\g\5\c\k\w\t\u\j\x\f\2\6\x\l\q\g\a\f\8\d\i\7\6\z\2\w\1\g\s\1\f\0\c\u\o\s\b\z\b\a\6\j\c\x\0\2\f\t\4\x\b\8\h\e\1\b\w\m\b\b\0\4\f\9\j\3\q\b\2\e\e\t\q\s\l\o\m\5\o\a\i\2\t\s\e\1\u\6\0\t\1\g\6\3\u\s\6\7\d\n\r\3\o\w\8\v\o\2\4\e\d\h\n\i\t\f\8\o\3\6\b\h\b\5\v\t\x\3\z\q\d\4\q\a\k\a\8\4\w\u\i\7\y\o\3\c\e\1\3\r\r\5\y\c\8\k\p\l\0\1\e\e\y\a\p\w\t\m\a\o\7\0\a\0\s\z\g\5\v\r\r\q\4\6\6\8\j\5\i\r\r\g\j\9\w\5\i\u\e\h\7\u\p\q\4\7\l\b\j\q\n\g\u\9\0\2\e\6\1\t\h\q\i\s\b\j\q\f\1\n\p\n\f\9\m\o\n\1\q\p\1\s\g\5\8\p\2\z\d\x\5\f\a\8\t\3\2\a\b\d\n\8\5\a\y\w\9\1\q\z\t\g\h\9\r\6\v\7\g\f\f\w\8\z\5\i\x\p\y\2\n\t\c\h\k\u\3\w\d\4\9\9\4\x\1\3\t\k\x\c\j\s\n\2\k\a\j\p\k\b\y\4\a\u\q\k\q\t\d\j\e\r\f\3\r\f\o\c\i\c\7\3\j\2\7\o\w\t\6\6\z\f\3\a\k\8\a\l\x\s\z\n\1\i\j\j\9\5\j\a\h\2\t\s\c\2\a\k\i\t\6\b\9\5\w\a\2\s\l\n\q\8\c\x\m\m\1\9\t\q\6\p\c\v\q\l\1\o\9\h\m\u\j\l\4\2\x\c\h\5\8\j\o\n\6\b\b\d\g\q\k\o\d\p\h\s\a\p\q\h\q\c\o\1\w\l\5\7\b\6\5\g\0\t\5\8\f\j\o\o\s\t\j ]] 00:08:47.184 05:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.184 05:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:47.184 [2024-07-11 05:55:03.000198] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:47.184 [2024-07-11 05:55:03.000394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65544 ] 00:08:47.443 [2024-07-11 05:55:03.170124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.443 [2024-07-11 05:55:03.332117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.702 [2024-07-11 05:55:03.491666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.073  Copying: 512/512 [B] (average 166 kBps) 00:08:49.073 00:08:49.073 05:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7sh861xc47y6z7ra38ywb65h3aho6krr4c4i6jm65lowljgpog5ckwtujxf26xlqgaf8di76z2w1gs1f0cuosbzba6jcx02ft4xb8he1bwmbb04f9j3qb2eetqslom5oai2tse1u60t1g63us67dnr3ow8vo24edhnitf8o36bhb5vtx3zqd4qaka84wui7yo3ce13rr5yc8kpl01eeyapwtmao70a0szg5vrrq4668j5irrgj9w5iueh7upq47lbjqngu902e61thqisbjqf1npnf9mon1qp1sg58p2zdx5fa8t32abdn85ayw91qztgh9r6v7gffw8z5ixpy2ntchku3wd4994x13tkxcjsn2kajpkby4auqkqtdjerf3rfocic73j27owt66zf3ak8alxszn1ijj95jah2tsc2akit6b95wa2slnq8cxmm19tq6pcvql1o9hmujl42xch58jon6bbdgqkodphsapqhqco1wl57b65g0t58fjoostj == 
\7\s\h\8\6\1\x\c\4\7\y\6\z\7\r\a\3\8\y\w\b\6\5\h\3\a\h\o\6\k\r\r\4\c\4\i\6\j\m\6\5\l\o\w\l\j\g\p\o\g\5\c\k\w\t\u\j\x\f\2\6\x\l\q\g\a\f\8\d\i\7\6\z\2\w\1\g\s\1\f\0\c\u\o\s\b\z\b\a\6\j\c\x\0\2\f\t\4\x\b\8\h\e\1\b\w\m\b\b\0\4\f\9\j\3\q\b\2\e\e\t\q\s\l\o\m\5\o\a\i\2\t\s\e\1\u\6\0\t\1\g\6\3\u\s\6\7\d\n\r\3\o\w\8\v\o\2\4\e\d\h\n\i\t\f\8\o\3\6\b\h\b\5\v\t\x\3\z\q\d\4\q\a\k\a\8\4\w\u\i\7\y\o\3\c\e\1\3\r\r\5\y\c\8\k\p\l\0\1\e\e\y\a\p\w\t\m\a\o\7\0\a\0\s\z\g\5\v\r\r\q\4\6\6\8\j\5\i\r\r\g\j\9\w\5\i\u\e\h\7\u\p\q\4\7\l\b\j\q\n\g\u\9\0\2\e\6\1\t\h\q\i\s\b\j\q\f\1\n\p\n\f\9\m\o\n\1\q\p\1\s\g\5\8\p\2\z\d\x\5\f\a\8\t\3\2\a\b\d\n\8\5\a\y\w\9\1\q\z\t\g\h\9\r\6\v\7\g\f\f\w\8\z\5\i\x\p\y\2\n\t\c\h\k\u\3\w\d\4\9\9\4\x\1\3\t\k\x\c\j\s\n\2\k\a\j\p\k\b\y\4\a\u\q\k\q\t\d\j\e\r\f\3\r\f\o\c\i\c\7\3\j\2\7\o\w\t\6\6\z\f\3\a\k\8\a\l\x\s\z\n\1\i\j\j\9\5\j\a\h\2\t\s\c\2\a\k\i\t\6\b\9\5\w\a\2\s\l\n\q\8\c\x\m\m\1\9\t\q\6\p\c\v\q\l\1\o\9\h\m\u\j\l\4\2\x\c\h\5\8\j\o\n\6\b\b\d\g\q\k\o\d\p\h\s\a\p\q\h\q\c\o\1\w\l\5\7\b\6\5\g\0\t\5\8\f\j\o\o\s\t\j ]] 00:08:49.073 05:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.073 05:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:49.073 [2024-07-11 05:55:04.678153] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:08:49.073 [2024-07-11 05:55:04.678330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65559 ] 00:08:49.073 [2024-07-11 05:55:04.848153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.332 [2024-07-11 05:55:05.012812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.332 [2024-07-11 05:55:05.174717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:50.527  Copying: 512/512 [B] (average 500 kBps) 00:08:50.527 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7sh861xc47y6z7ra38ywb65h3aho6krr4c4i6jm65lowljgpog5ckwtujxf26xlqgaf8di76z2w1gs1f0cuosbzba6jcx02ft4xb8he1bwmbb04f9j3qb2eetqslom5oai2tse1u60t1g63us67dnr3ow8vo24edhnitf8o36bhb5vtx3zqd4qaka84wui7yo3ce13rr5yc8kpl01eeyapwtmao70a0szg5vrrq4668j5irrgj9w5iueh7upq47lbjqngu902e61thqisbjqf1npnf9mon1qp1sg58p2zdx5fa8t32abdn85ayw91qztgh9r6v7gffw8z5ixpy2ntchku3wd4994x13tkxcjsn2kajpkby4auqkqtdjerf3rfocic73j27owt66zf3ak8alxszn1ijj95jah2tsc2akit6b95wa2slnq8cxmm19tq6pcvql1o9hmujl42xch58jon6bbdgqkodphsapqhqco1wl57b65g0t58fjoostj == 
\7\s\h\8\6\1\x\c\4\7\y\6\z\7\r\a\3\8\y\w\b\6\5\h\3\a\h\o\6\k\r\r\4\c\4\i\6\j\m\6\5\l\o\w\l\j\g\p\o\g\5\c\k\w\t\u\j\x\f\2\6\x\l\q\g\a\f\8\d\i\7\6\z\2\w\1\g\s\1\f\0\c\u\o\s\b\z\b\a\6\j\c\x\0\2\f\t\4\x\b\8\h\e\1\b\w\m\b\b\0\4\f\9\j\3\q\b\2\e\e\t\q\s\l\o\m\5\o\a\i\2\t\s\e\1\u\6\0\t\1\g\6\3\u\s\6\7\d\n\r\3\o\w\8\v\o\2\4\e\d\h\n\i\t\f\8\o\3\6\b\h\b\5\v\t\x\3\z\q\d\4\q\a\k\a\8\4\w\u\i\7\y\o\3\c\e\1\3\r\r\5\y\c\8\k\p\l\0\1\e\e\y\a\p\w\t\m\a\o\7\0\a\0\s\z\g\5\v\r\r\q\4\6\6\8\j\5\i\r\r\g\j\9\w\5\i\u\e\h\7\u\p\q\4\7\l\b\j\q\n\g\u\9\0\2\e\6\1\t\h\q\i\s\b\j\q\f\1\n\p\n\f\9\m\o\n\1\q\p\1\s\g\5\8\p\2\z\d\x\5\f\a\8\t\3\2\a\b\d\n\8\5\a\y\w\9\1\q\z\t\g\h\9\r\6\v\7\g\f\f\w\8\z\5\i\x\p\y\2\n\t\c\h\k\u\3\w\d\4\9\9\4\x\1\3\t\k\x\c\j\s\n\2\k\a\j\p\k\b\y\4\a\u\q\k\q\t\d\j\e\r\f\3\r\f\o\c\i\c\7\3\j\2\7\o\w\t\6\6\z\f\3\a\k\8\a\l\x\s\z\n\1\i\j\j\9\5\j\a\h\2\t\s\c\2\a\k\i\t\6\b\9\5\w\a\2\s\l\n\q\8\c\x\m\m\1\9\t\q\6\p\c\v\q\l\1\o\9\h\m\u\j\l\4\2\x\c\h\5\8\j\o\n\6\b\b\d\g\q\k\o\d\p\h\s\a\p\q\h\q\c\o\1\w\l\5\7\b\6\5\g\0\t\5\8\f\j\o\o\s\t\j ]] 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.527 05:55:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:50.527 [2024-07-11 05:55:06.368102] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:50.527 [2024-07-11 05:55:06.368285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65584 ] 00:08:50.786 [2024-07-11 05:55:06.542932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.045 [2024-07-11 05:55:06.774187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.045 [2024-07-11 05:55:06.932966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.239  Copying: 512/512 [B] (average 500 kBps) 00:08:52.239 00:08:52.239 05:55:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1uwdnp1mxx4fwv8llg9f523p9bawrv1ftw86nb860bh94z4maolgzz2rk3rtoopeon635kbf5z8zh5xbc1q0jn4uv6swsa4zuutc05v7b2gh1ld2av7qqlw48jrtfcdibwn8kgyqcvrcj1e94toiwruunuwiouf21wzec8yacmlo9i0zl0wxewrq10sgk4h6gkws9cefksk1kq1g4vhuqp4fxduor8i7gkebfp0zy7mohy0s8llrnhq8fgongql3j6oqmusdrfqme44ol0q77yziz7ja0udp5wrc3f4350btzg8p9bvavknlwkw0fiyqmjfuwc5epkx0a22pvg2wn0rjur3jvz49qf0j6lse9jwnu6yqy8dgzdgssrfpfgdqcxqwmhpepahuabjxhzsvy6mkyh5cdtp5897u3sxbxe605tx0o11a6xm5y3qill860atpuwyt27nh8vghw6edsd5iw73vlqfxcprld4xqrkeuotubods0wpxwmsgy7mlt == \1\u\w\d\n\p\1\m\x\x\4\f\w\v\8\l\l\g\9\f\5\2\3\p\9\b\a\w\r\v\1\f\t\w\8\6\n\b\8\6\0\b\h\9\4\z\4\m\a\o\l\g\z\z\2\r\k\3\r\t\o\o\p\e\o\n\6\3\5\k\b\f\5\z\8\z\h\5\x\b\c\1\q\0\j\n\4\u\v\6\s\w\s\a\4\z\u\u\t\c\0\5\v\7\b\2\g\h\1\l\d\2\a\v\7\q\q\l\w\4\8\j\r\t\f\c\d\i\b\w\n\8\k\g\y\q\c\v\r\c\j\1\e\9\4\t\o\i\w\r\u\u\n\u\w\i\o\u\f\2\1\w\z\e\c\8\y\a\c\m\l\o\9\i\0\z\l\0\w\x\e\w\r\q\1\0\s\g\k\4\h\6\g\k\w\s\9\c\e\f\k\s\k\1\k\q\1\g\4\v\h\u\q\p\4\f\x\d\u\o\r\8\i\7\g\k\e\b\f\p\0\z\y\7\m\o\h\y\0\s\8\l\l\r\n\h\q\8\f\g\o\n\g\q\l\3\j\6\o\q\m\u\s\d\r\f\q\m\e\4\4\o\l\0\q\7\7\y\z\i\z\7\j\a\0\u\d\p\5\w\r\c\3\f\4\3\5\0\b\t\z\g\8\p\9\b\v\a\v\k\n\l\w\k\w\0\f\i\y\q\m\j\f\u\w\c\5\e\p\k\x\0\a\2\2\p\v\g\2\w\n\0\r\j\u\r\3\j\v\z\4\9\q\f\0\j\6\l\s\e\9\j\w\n\u\6\y\q\y\8\d\g\z\d\g\s\s\r\f\p\f\g\d\q\c\x\q\w\m\h\p\e\p\a\h\u\a\b\j\x\h\z\s\v\y\6\m\k\y\h\5\c\d\t\p\5\8\9\7\u\3\s\x\b\x\e\6\0\5\t\x\0\o\1\1\a\6\x\m\5\y\3\q\i\l\l\8\6\0\a\t\p\u\w\y\t\2\7\n\h\8\v\g\h\w\6\e\d\s\d\5\i\w\7\3\v\l\q\f\x\c\p\r\l\d\4\x\q\r\k\e\u\o\t\u\b\o\d\s\0\w\p\x\w\m\s\g\y\7\m\l\t ]] 00:08:52.239 05:55:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:52.239 05:55:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:52.497 [2024-07-11 05:55:08.201005] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:52.497 [2024-07-11 05:55:08.201182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65604 ] 00:08:52.755 [2024-07-11 05:55:08.423725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.755 [2024-07-11 05:55:08.609354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.014 [2024-07-11 05:55:08.777048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.949  Copying: 512/512 [B] (average 500 kBps) 00:08:53.949 00:08:54.208 05:55:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1uwdnp1mxx4fwv8llg9f523p9bawrv1ftw86nb860bh94z4maolgzz2rk3rtoopeon635kbf5z8zh5xbc1q0jn4uv6swsa4zuutc05v7b2gh1ld2av7qqlw48jrtfcdibwn8kgyqcvrcj1e94toiwruunuwiouf21wzec8yacmlo9i0zl0wxewrq10sgk4h6gkws9cefksk1kq1g4vhuqp4fxduor8i7gkebfp0zy7mohy0s8llrnhq8fgongql3j6oqmusdrfqme44ol0q77yziz7ja0udp5wrc3f4350btzg8p9bvavknlwkw0fiyqmjfuwc5epkx0a22pvg2wn0rjur3jvz49qf0j6lse9jwnu6yqy8dgzdgssrfpfgdqcxqwmhpepahuabjxhzsvy6mkyh5cdtp5897u3sxbxe605tx0o11a6xm5y3qill860atpuwyt27nh8vghw6edsd5iw73vlqfxcprld4xqrkeuotubods0wpxwmsgy7mlt == \1\u\w\d\n\p\1\m\x\x\4\f\w\v\8\l\l\g\9\f\5\2\3\p\9\b\a\w\r\v\1\f\t\w\8\6\n\b\8\6\0\b\h\9\4\z\4\m\a\o\l\g\z\z\2\r\k\3\r\t\o\o\p\e\o\n\6\3\5\k\b\f\5\z\8\z\h\5\x\b\c\1\q\0\j\n\4\u\v\6\s\w\s\a\4\z\u\u\t\c\0\5\v\7\b\2\g\h\1\l\d\2\a\v\7\q\q\l\w\4\8\j\r\t\f\c\d\i\b\w\n\8\k\g\y\q\c\v\r\c\j\1\e\9\4\t\o\i\w\r\u\u\n\u\w\i\o\u\f\2\1\w\z\e\c\8\y\a\c\m\l\o\9\i\0\z\l\0\w\x\e\w\r\q\1\0\s\g\k\4\h\6\g\k\w\s\9\c\e\f\k\s\k\1\k\q\1\g\4\v\h\u\q\p\4\f\x\d\u\o\r\8\i\7\g\k\e\b\f\p\0\z\y\7\m\o\h\y\0\s\8\l\l\r\n\h\q\8\f\g\o\n\g\q\l\3\j\6\o\q\m\u\s\d\r\f\q\m\e\4\4\o\l\0\q\7\7\y\z\i\z\7\j\a\0\u\d\p\5\w\r\c\3\f\4\3\5\0\b\t\z\g\8\p\9\b\v\a\v\k\n\l\w\k\w\0\f\i\y\q\m\j\f\u\w\c\5\e\p\k\x\0\a\2\2\p\v\g\2\w\n\0\r\j\u\r\3\j\v\z\4\9\q\f\0\j\6\l\s\e\9\j\w\n\u\6\y\q\y\8\d\g\z\d\g\s\s\r\f\p\f\g\d\q\c\x\q\w\m\h\p\e\p\a\h\u\a\b\j\x\h\z\s\v\y\6\m\k\y\h\5\c\d\t\p\5\8\9\7\u\3\s\x\b\x\e\6\0\5\t\x\0\o\1\1\a\6\x\m\5\y\3\q\i\l\l\8\6\0\a\t\p\u\w\y\t\2\7\n\h\8\v\g\h\w\6\e\d\s\d\5\i\w\7\3\v\l\q\f\x\c\p\r\l\d\4\x\q\r\k\e\u\o\t\u\b\o\d\s\0\w\p\x\w\m\s\g\y\7\m\l\t ]] 00:08:54.208 05:55:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:54.208 05:55:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:54.208 [2024-07-11 05:55:09.987826] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:54.208 [2024-07-11 05:55:09.987999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65629 ] 00:08:54.466 [2024-07-11 05:55:10.159545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.466 [2024-07-11 05:55:10.314872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.725 [2024-07-11 05:55:10.474251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.660  Copying: 512/512 [B] (average 250 kBps) 00:08:55.660 00:08:55.660 05:55:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1uwdnp1mxx4fwv8llg9f523p9bawrv1ftw86nb860bh94z4maolgzz2rk3rtoopeon635kbf5z8zh5xbc1q0jn4uv6swsa4zuutc05v7b2gh1ld2av7qqlw48jrtfcdibwn8kgyqcvrcj1e94toiwruunuwiouf21wzec8yacmlo9i0zl0wxewrq10sgk4h6gkws9cefksk1kq1g4vhuqp4fxduor8i7gkebfp0zy7mohy0s8llrnhq8fgongql3j6oqmusdrfqme44ol0q77yziz7ja0udp5wrc3f4350btzg8p9bvavknlwkw0fiyqmjfuwc5epkx0a22pvg2wn0rjur3jvz49qf0j6lse9jwnu6yqy8dgzdgssrfpfgdqcxqwmhpepahuabjxhzsvy6mkyh5cdtp5897u3sxbxe605tx0o11a6xm5y3qill860atpuwyt27nh8vghw6edsd5iw73vlqfxcprld4xqrkeuotubods0wpxwmsgy7mlt == \1\u\w\d\n\p\1\m\x\x\4\f\w\v\8\l\l\g\9\f\5\2\3\p\9\b\a\w\r\v\1\f\t\w\8\6\n\b\8\6\0\b\h\9\4\z\4\m\a\o\l\g\z\z\2\r\k\3\r\t\o\o\p\e\o\n\6\3\5\k\b\f\5\z\8\z\h\5\x\b\c\1\q\0\j\n\4\u\v\6\s\w\s\a\4\z\u\u\t\c\0\5\v\7\b\2\g\h\1\l\d\2\a\v\7\q\q\l\w\4\8\j\r\t\f\c\d\i\b\w\n\8\k\g\y\q\c\v\r\c\j\1\e\9\4\t\o\i\w\r\u\u\n\u\w\i\o\u\f\2\1\w\z\e\c\8\y\a\c\m\l\o\9\i\0\z\l\0\w\x\e\w\r\q\1\0\s\g\k\4\h\6\g\k\w\s\9\c\e\f\k\s\k\1\k\q\1\g\4\v\h\u\q\p\4\f\x\d\u\o\r\8\i\7\g\k\e\b\f\p\0\z\y\7\m\o\h\y\0\s\8\l\l\r\n\h\q\8\f\g\o\n\g\q\l\3\j\6\o\q\m\u\s\d\r\f\q\m\e\4\4\o\l\0\q\7\7\y\z\i\z\7\j\a\0\u\d\p\5\w\r\c\3\f\4\3\5\0\b\t\z\g\8\p\9\b\v\a\v\k\n\l\w\k\w\0\f\i\y\q\m\j\f\u\w\c\5\e\p\k\x\0\a\2\2\p\v\g\2\w\n\0\r\j\u\r\3\j\v\z\4\9\q\f\0\j\6\l\s\e\9\j\w\n\u\6\y\q\y\8\d\g\z\d\g\s\s\r\f\p\f\g\d\q\c\x\q\w\m\h\p\e\p\a\h\u\a\b\j\x\h\z\s\v\y\6\m\k\y\h\5\c\d\t\p\5\8\9\7\u\3\s\x\b\x\e\6\0\5\t\x\0\o\1\1\a\6\x\m\5\y\3\q\i\l\l\8\6\0\a\t\p\u\w\y\t\2\7\n\h\8\v\g\h\w\6\e\d\s\d\5\i\w\7\3\v\l\q\f\x\c\p\r\l\d\4\x\q\r\k\e\u\o\t\u\b\o\d\s\0\w\p\x\w\m\s\g\y\7\m\l\t ]] 00:08:55.660 05:55:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:55.660 05:55:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:55.919 [2024-07-11 05:55:11.611402] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:55.920 [2024-07-11 05:55:11.611602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65643 ] 00:08:55.920 [2024-07-11 05:55:11.781608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.178 [2024-07-11 05:55:11.950465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.436 [2024-07-11 05:55:12.101375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.817  Copying: 512/512 [B] (average 500 kBps) 00:08:57.817 00:08:57.817 ************************************ 00:08:57.817 END TEST dd_flags_misc_forced_aio 00:08:57.817 ************************************ 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1uwdnp1mxx4fwv8llg9f523p9bawrv1ftw86nb860bh94z4maolgzz2rk3rtoopeon635kbf5z8zh5xbc1q0jn4uv6swsa4zuutc05v7b2gh1ld2av7qqlw48jrtfcdibwn8kgyqcvrcj1e94toiwruunuwiouf21wzec8yacmlo9i0zl0wxewrq10sgk4h6gkws9cefksk1kq1g4vhuqp4fxduor8i7gkebfp0zy7mohy0s8llrnhq8fgongql3j6oqmusdrfqme44ol0q77yziz7ja0udp5wrc3f4350btzg8p9bvavknlwkw0fiyqmjfuwc5epkx0a22pvg2wn0rjur3jvz49qf0j6lse9jwnu6yqy8dgzdgssrfpfgdqcxqwmhpepahuabjxhzsvy6mkyh5cdtp5897u3sxbxe605tx0o11a6xm5y3qill860atpuwyt27nh8vghw6edsd5iw73vlqfxcprld4xqrkeuotubods0wpxwmsgy7mlt == \1\u\w\d\n\p\1\m\x\x\4\f\w\v\8\l\l\g\9\f\5\2\3\p\9\b\a\w\r\v\1\f\t\w\8\6\n\b\8\6\0\b\h\9\4\z\4\m\a\o\l\g\z\z\2\r\k\3\r\t\o\o\p\e\o\n\6\3\5\k\b\f\5\z\8\z\h\5\x\b\c\1\q\0\j\n\4\u\v\6\s\w\s\a\4\z\u\u\t\c\0\5\v\7\b\2\g\h\1\l\d\2\a\v\7\q\q\l\w\4\8\j\r\t\f\c\d\i\b\w\n\8\k\g\y\q\c\v\r\c\j\1\e\9\4\t\o\i\w\r\u\u\n\u\w\i\o\u\f\2\1\w\z\e\c\8\y\a\c\m\l\o\9\i\0\z\l\0\w\x\e\w\r\q\1\0\s\g\k\4\h\6\g\k\w\s\9\c\e\f\k\s\k\1\k\q\1\g\4\v\h\u\q\p\4\f\x\d\u\o\r\8\i\7\g\k\e\b\f\p\0\z\y\7\m\o\h\y\0\s\8\l\l\r\n\h\q\8\f\g\o\n\g\q\l\3\j\6\o\q\m\u\s\d\r\f\q\m\e\4\4\o\l\0\q\7\7\y\z\i\z\7\j\a\0\u\d\p\5\w\r\c\3\f\4\3\5\0\b\t\z\g\8\p\9\b\v\a\v\k\n\l\w\k\w\0\f\i\y\q\m\j\f\u\w\c\5\e\p\k\x\0\a\2\2\p\v\g\2\w\n\0\r\j\u\r\3\j\v\z\4\9\q\f\0\j\6\l\s\e\9\j\w\n\u\6\y\q\y\8\d\g\z\d\g\s\s\r\f\p\f\g\d\q\c\x\q\w\m\h\p\e\p\a\h\u\a\b\j\x\h\z\s\v\y\6\m\k\y\h\5\c\d\t\p\5\8\9\7\u\3\s\x\b\x\e\6\0\5\t\x\0\o\1\1\a\6\x\m\5\y\3\q\i\l\l\8\6\0\a\t\p\u\w\y\t\2\7\n\h\8\v\g\h\w\6\e\d\s\d\5\i\w\7\3\v\l\q\f\x\c\p\r\l\d\4\x\q\r\k\e\u\o\t\u\b\o\d\s\0\w\p\x\w\m\s\g\y\7\m\l\t ]] 00:08:57.817 00:08:57.817 real 0m14.186s 00:08:57.817 user 0m11.657s 00:08:57.817 sys 0m1.534s 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:57.817 00:08:57.817 real 0m57.220s 00:08:57.817 user 0m45.056s 00:08:57.817 sys 0m13.710s 00:08:57.817 05:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.817 
05:55:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:57.817 ************************************ 00:08:57.817 END TEST spdk_dd_posix 00:08:57.817 ************************************ 00:08:57.817 05:55:13 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:08:57.817 05:55:13 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:57.817 05:55:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.817 05:55:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.817 05:55:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:57.817 ************************************ 00:08:57.817 START TEST spdk_dd_malloc 00:08:57.817 ************************************ 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:57.817 * Looking for test storage... 00:08:57.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:57.817 ************************************ 00:08:57.817 START TEST dd_malloc_copy 00:08:57.817 ************************************ 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:57.817 05:55:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:57.817 { 00:08:57.817 "subsystems": [ 00:08:57.817 { 00:08:57.817 "subsystem": "bdev", 00:08:57.817 "config": [ 00:08:57.817 { 00:08:57.817 "params": { 00:08:57.817 "block_size": 512, 00:08:57.817 "num_blocks": 1048576, 00:08:57.817 "name": "malloc0" 00:08:57.817 }, 00:08:57.817 "method": "bdev_malloc_create" 00:08:57.817 }, 00:08:57.817 { 00:08:57.817 "params": { 00:08:57.817 "block_size": 512, 00:08:57.817 "num_blocks": 1048576, 00:08:57.817 "name": "malloc1" 00:08:57.817 }, 00:08:57.817 "method": "bdev_malloc_create" 00:08:57.817 }, 00:08:57.817 { 00:08:57.817 "method": "bdev_wait_for_examine" 00:08:57.817 } 00:08:57.817 ] 00:08:57.817 } 00:08:57.817 ] 00:08:57.817 } 00:08:57.817 [2024-07-11 05:55:13.695911] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:08:57.817 [2024-07-11 05:55:13.696122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65729 ] 00:08:58.075 [2024-07-11 05:55:13.870559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.333 [2024-07-11 05:55:14.101964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.591 [2024-07-11 05:55:14.294661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.905  Copying: 188/512 [MB] (188 MBps) Copying: 381/512 [MB] (192 MBps) Copying: 512/512 [MB] (average 184 MBps) 00:09:04.905 00:09:05.164 05:55:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:05.164 05:55:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:05.164 05:55:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:05.164 05:55:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:05.164 { 00:09:05.164 "subsystems": [ 00:09:05.164 { 00:09:05.164 "subsystem": "bdev", 00:09:05.164 "config": [ 00:09:05.164 { 00:09:05.164 "params": { 00:09:05.164 "block_size": 512, 00:09:05.164 "num_blocks": 1048576, 00:09:05.164 "name": "malloc0" 00:09:05.164 }, 00:09:05.164 "method": "bdev_malloc_create" 00:09:05.164 }, 00:09:05.164 { 00:09:05.164 "params": { 00:09:05.164 "block_size": 512, 00:09:05.164 "num_blocks": 1048576, 00:09:05.164 "name": "malloc1" 00:09:05.164 }, 00:09:05.164 "method": "bdev_malloc_create" 00:09:05.164 }, 00:09:05.164 { 00:09:05.164 "method": "bdev_wait_for_examine" 00:09:05.164 } 00:09:05.164 ] 00:09:05.164 } 00:09:05.164 ] 00:09:05.164 } 00:09:05.164 [2024-07-11 05:55:20.946222] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:05.165 [2024-07-11 05:55:20.946383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65816 ] 00:09:05.423 [2024-07-11 05:55:21.114899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.423 [2024-07-11 05:55:21.261999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.682 [2024-07-11 05:55:21.423030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.752  Copying: 173/512 [MB] (173 MBps) Copying: 350/512 [MB] (177 MBps) Copying: 512/512 [MB] (average 176 MBps) 00:09:12.752 00:09:12.752 00:09:12.752 real 0m14.669s 00:09:12.752 user 0m13.664s 00:09:12.752 sys 0m0.815s 00:09:12.752 05:55:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.752 05:55:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:12.752 ************************************ 00:09:12.752 END TEST dd_malloc_copy 00:09:12.752 ************************************ 00:09:12.752 05:55:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:09:12.752 00:09:12.752 real 0m14.808s 00:09:12.752 user 0m13.721s 00:09:12.752 sys 0m0.898s 00:09:12.752 05:55:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.752 05:55:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:12.752 ************************************ 00:09:12.752 END TEST spdk_dd_malloc 00:09:12.752 ************************************ 00:09:12.752 05:55:28 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:12.752 05:55:28 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:12.752 05:55:28 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:12.752 05:55:28 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.752 05:55:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:12.752 ************************************ 00:09:12.752 START TEST spdk_dd_bdev_to_bdev 00:09:12.752 ************************************ 00:09:12.752 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:12.753 * Looking for test storage... 
00:09:12.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:12.753 
05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:12.753 ************************************ 00:09:12.753 START TEST dd_inflate_file 00:09:12.753 ************************************ 00:09:12.753 05:55:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:12.753 [2024-07-11 05:55:28.548671] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:12.753 [2024-07-11 05:55:28.548853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65966 ] 00:09:13.012 [2024-07-11 05:55:28.718598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.012 [2024-07-11 05:55:28.874445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.271 [2024-07-11 05:55:29.025748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.208  Copying: 64/64 [MB] (average 1641 MBps) 00:09:14.208 00:09:14.208 00:09:14.208 real 0m1.651s 00:09:14.208 user 0m1.363s 00:09:14.208 sys 0m0.845s 00:09:14.208 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.208 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:14.208 ************************************ 00:09:14.208 END TEST dd_inflate_file 00:09:14.208 ************************************ 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:14.467 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:14.467 ************************************ 00:09:14.467 START TEST dd_copy_to_out_bdev 00:09:14.467 ************************************ 00:09:14.468 05:55:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:14.468 { 00:09:14.468 "subsystems": [ 00:09:14.468 { 00:09:14.468 "subsystem": "bdev", 00:09:14.468 "config": [ 00:09:14.468 { 00:09:14.468 "params": { 00:09:14.468 "trtype": "pcie", 00:09:14.468 "traddr": "0000:00:10.0", 00:09:14.468 "name": "Nvme0" 00:09:14.468 }, 00:09:14.468 "method": "bdev_nvme_attach_controller" 00:09:14.468 }, 00:09:14.468 { 00:09:14.468 "params": { 00:09:14.468 "trtype": "pcie", 00:09:14.468 "traddr": "0000:00:11.0", 00:09:14.468 "name": "Nvme1" 00:09:14.468 }, 00:09:14.468 "method": "bdev_nvme_attach_controller" 00:09:14.468 }, 00:09:14.468 { 00:09:14.468 "method": "bdev_wait_for_examine" 00:09:14.468 } 00:09:14.468 ] 00:09:14.468 } 00:09:14.468 ] 00:09:14.468 } 00:09:14.468 [2024-07-11 05:55:30.262565] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
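Annotation: the test_file0_size=67108891 reported by wc -c a few lines up is consistent with the preceding setup: the 26-byte magic 'This Is Our Magic, find it' is echoed first (presumably into dd.dump0 with a trailing newline; the redirection is not visible in the trace), and dd_inflate_file then appends 64 blocks of 1 MiB of zeros with --oflag=append.

    # Sanity check of the reported file size (trailing newline from echo assumed):
    echo $(( 26 + 1 + 64 * 1048576 ))   # 67108891 bytes, matching test_file0_size above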
00:09:14.468 [2024-07-11 05:55:30.262803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66011 ] 00:09:14.726 [2024-07-11 05:55:30.431804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.726 [2024-07-11 05:55:30.586661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.985 [2024-07-11 05:55:30.740510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.786  Copying: 49/64 [MB] (49 MBps) Copying: 64/64 [MB] (average 49 MBps) 00:09:17.786 00:09:17.786 00:09:17.786 real 0m3.143s 00:09:17.786 user 0m2.843s 00:09:17.786 sys 0m2.202s 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:17.786 ************************************ 00:09:17.786 END TEST dd_copy_to_out_bdev 00:09:17.786 ************************************ 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:17.786 ************************************ 00:09:17.786 START TEST dd_offset_magic 00:09:17.786 ************************************ 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:17.786 05:55:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:17.786 { 00:09:17.786 "subsystems": [ 00:09:17.786 { 00:09:17.786 "subsystem": "bdev", 00:09:17.786 "config": [ 00:09:17.786 { 00:09:17.786 "params": { 00:09:17.786 "trtype": "pcie", 00:09:17.786 "traddr": "0000:00:10.0", 00:09:17.786 "name": "Nvme0" 00:09:17.786 }, 00:09:17.787 "method": "bdev_nvme_attach_controller" 00:09:17.787 }, 00:09:17.787 { 00:09:17.787 "params": { 00:09:17.787 "trtype": "pcie", 00:09:17.787 "traddr": 
"0000:00:11.0", 00:09:17.787 "name": "Nvme1" 00:09:17.787 }, 00:09:17.787 "method": "bdev_nvme_attach_controller" 00:09:17.787 }, 00:09:17.787 { 00:09:17.787 "method": "bdev_wait_for_examine" 00:09:17.787 } 00:09:17.787 ] 00:09:17.787 } 00:09:17.787 ] 00:09:17.787 } 00:09:17.787 [2024-07-11 05:55:33.443596] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:17.787 [2024-07-11 05:55:33.443820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66073 ] 00:09:17.787 [2024-07-11 05:55:33.600029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.045 [2024-07-11 05:55:33.759611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.045 [2024-07-11 05:55:33.906252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.239  Copying: 65/65 [MB] (average 915 MBps) 00:09:19.239 00:09:19.239 05:55:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:19.239 05:55:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:19.239 05:55:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:19.239 05:55:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:19.239 { 00:09:19.239 "subsystems": [ 00:09:19.239 { 00:09:19.239 "subsystem": "bdev", 00:09:19.239 "config": [ 00:09:19.239 { 00:09:19.239 "params": { 00:09:19.239 "trtype": "pcie", 00:09:19.239 "traddr": "0000:00:10.0", 00:09:19.239 "name": "Nvme0" 00:09:19.239 }, 00:09:19.239 "method": "bdev_nvme_attach_controller" 00:09:19.239 }, 00:09:19.239 { 00:09:19.239 "params": { 00:09:19.239 "trtype": "pcie", 00:09:19.239 "traddr": "0000:00:11.0", 00:09:19.239 "name": "Nvme1" 00:09:19.239 }, 00:09:19.239 "method": "bdev_nvme_attach_controller" 00:09:19.239 }, 00:09:19.239 { 00:09:19.239 "method": "bdev_wait_for_examine" 00:09:19.239 } 00:09:19.239 ] 00:09:19.239 } 00:09:19.239 ] 00:09:19.239 } 00:09:19.498 [2024-07-11 05:55:35.170844] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:19.498 [2024-07-11 05:55:35.171020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66095 ] 00:09:19.498 [2024-07-11 05:55:35.337530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.756 [2024-07-11 05:55:35.505184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.015 [2024-07-11 05:55:35.682506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.406  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:21.406 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:21.406 05:55:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:21.406 { 00:09:21.406 "subsystems": [ 00:09:21.406 { 00:09:21.406 "subsystem": "bdev", 00:09:21.406 "config": [ 00:09:21.406 { 00:09:21.406 "params": { 00:09:21.406 "trtype": "pcie", 00:09:21.406 "traddr": "0000:00:10.0", 00:09:21.406 "name": "Nvme0" 00:09:21.406 }, 00:09:21.406 "method": "bdev_nvme_attach_controller" 00:09:21.406 }, 00:09:21.406 { 00:09:21.406 "params": { 00:09:21.406 "trtype": "pcie", 00:09:21.406 "traddr": "0000:00:11.0", 00:09:21.406 "name": "Nvme1" 00:09:21.406 }, 00:09:21.406 "method": "bdev_nvme_attach_controller" 00:09:21.406 }, 00:09:21.406 { 00:09:21.406 "method": "bdev_wait_for_examine" 00:09:21.406 } 00:09:21.406 ] 00:09:21.406 } 00:09:21.406 ] 00:09:21.406 } 00:09:21.406 [2024-07-11 05:55:37.027841] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
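Annotation: the offset-16 pass above shows the dd_offset_magic pattern that the offset-64 pass below repeats. Because dd.dump0, with the magic at its head, was written onto Nvme0n1 by the previous test, copying 65 blocks of 1 MiB with --seek=<offset> lands that first block at block <offset> of Nvme1n1, and reading a single block back with --skip=<offset> should yield the 26-byte magic as its first bytes. A sketch of the loop, assuming the two-controller JSON shown in the traces saved as nvme.json and assuming read -rn26 is fed from dd.dump1 (that redirection is not visible in the trace):

    for offset in 16 64; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=$offset --bs=1048576 --json nvme.json
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=$offset --bs=1048576 --json nvme.json
        read -rn26 magic_check < dd.dump1                       # first 26 bytes of the read-back block
        [[ $magic_check == 'This Is Our Magic, find it' ]]      # must match the magic written earlier
    done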
00:09:21.406 [2024-07-11 05:55:37.028011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66129 ] 00:09:21.406 [2024-07-11 05:55:37.199312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.665 [2024-07-11 05:55:37.361953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.666 [2024-07-11 05:55:37.528636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.863  Copying: 65/65 [MB] (average 1065 MBps) 00:09:22.863 00:09:22.863 05:55:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:22.863 05:55:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:22.863 05:55:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:22.863 05:55:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:22.863 { 00:09:22.863 "subsystems": [ 00:09:22.863 { 00:09:22.863 "subsystem": "bdev", 00:09:22.863 "config": [ 00:09:22.863 { 00:09:22.863 "params": { 00:09:22.863 "trtype": "pcie", 00:09:22.863 "traddr": "0000:00:10.0", 00:09:22.863 "name": "Nvme0" 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }, 00:09:22.863 { 00:09:22.863 "params": { 00:09:22.863 "trtype": "pcie", 00:09:22.863 "traddr": "0000:00:11.0", 00:09:22.863 "name": "Nvme1" 00:09:22.863 }, 00:09:22.863 "method": "bdev_nvme_attach_controller" 00:09:22.863 }, 00:09:22.863 { 00:09:22.863 "method": "bdev_wait_for_examine" 00:09:22.863 } 00:09:22.863 ] 00:09:22.863 } 00:09:22.863 ] 00:09:22.863 } 00:09:22.863 [2024-07-11 05:55:38.749192] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:22.863 [2024-07-11 05:55:38.749323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66156 ] 00:09:23.122 [2024-07-11 05:55:38.908170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.381 [2024-07-11 05:55:39.068921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.381 [2024-07-11 05:55:39.233743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.577  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:24.577 00:09:24.577 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:24.577 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:24.577 00:09:24.577 real 0m7.132s 00:09:24.577 user 0m6.110s 00:09:24.577 sys 0m2.110s 00:09:24.577 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.577 ************************************ 00:09:24.577 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:24.577 END TEST dd_offset_magic 00:09:24.577 ************************************ 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:24.836 05:55:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.836 { 00:09:24.836 "subsystems": [ 00:09:24.836 { 00:09:24.836 "subsystem": "bdev", 00:09:24.836 "config": [ 00:09:24.836 { 00:09:24.836 "params": { 00:09:24.836 "trtype": "pcie", 00:09:24.836 "traddr": "0000:00:10.0", 00:09:24.836 "name": "Nvme0" 00:09:24.836 }, 00:09:24.836 "method": "bdev_nvme_attach_controller" 00:09:24.836 }, 00:09:24.836 { 00:09:24.836 "params": { 00:09:24.836 "trtype": "pcie", 00:09:24.836 "traddr": "0000:00:11.0", 00:09:24.836 "name": "Nvme1" 00:09:24.836 }, 00:09:24.836 "method": "bdev_nvme_attach_controller" 00:09:24.836 }, 00:09:24.836 { 00:09:24.836 "method": "bdev_wait_for_examine" 00:09:24.836 } 00:09:24.836 ] 00:09:24.836 } 00:09:24.836 ] 00:09:24.836 } 00:09:24.836 [2024-07-11 05:55:40.617054] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
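Annotation: in the cleanup above, clear_nvme is asked to wipe 4,194,330 bytes of Nvme0n1 with a 1 MiB block size; count=5 is consistent with rounding that size up to whole blocks, which is why the run below reports 5120 kB copied.

    # Count derivation (ceiling division), assuming count = size rounded up to whole 1 MiB blocks:
    echo $(( (4194330 + 1048576 - 1) / 1048576 ))   # 5, i.e. the 5120 kB zero-fill seen below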
00:09:24.836 [2024-07-11 05:55:40.617212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66205 ] 00:09:25.095 [2024-07-11 05:55:40.776563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.095 [2024-07-11 05:55:40.933828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.354 [2024-07-11 05:55:41.112361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.551  Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:26.551 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:26.551 05:55:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.551 { 00:09:26.551 "subsystems": [ 00:09:26.551 { 00:09:26.551 "subsystem": "bdev", 00:09:26.551 "config": [ 00:09:26.551 { 00:09:26.551 "params": { 00:09:26.551 "trtype": "pcie", 00:09:26.551 "traddr": "0000:00:10.0", 00:09:26.551 "name": "Nvme0" 00:09:26.551 }, 00:09:26.551 "method": "bdev_nvme_attach_controller" 00:09:26.551 }, 00:09:26.551 { 00:09:26.551 "params": { 00:09:26.551 "trtype": "pcie", 00:09:26.551 "traddr": "0000:00:11.0", 00:09:26.551 "name": "Nvme1" 00:09:26.551 }, 00:09:26.551 "method": "bdev_nvme_attach_controller" 00:09:26.551 }, 00:09:26.551 { 00:09:26.551 "method": "bdev_wait_for_examine" 00:09:26.551 } 00:09:26.551 ] 00:09:26.551 } 00:09:26.551 ] 00:09:26.551 } 00:09:26.551 [2024-07-11 05:55:42.470598] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:26.551 [2024-07-11 05:55:42.470796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66232 ] 00:09:26.810 [2024-07-11 05:55:42.642707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.069 [2024-07-11 05:55:42.809254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.069 [2024-07-11 05:55:42.961791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.282  Copying: 5120/5120 [kB] (average 833 MBps) 00:09:28.282 00:09:28.541 05:55:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:28.541 00:09:28.541 real 0m15.883s 00:09:28.541 user 0m13.537s 00:09:28.541 sys 0m6.954s 00:09:28.541 05:55:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.541 05:55:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:28.541 ************************************ 00:09:28.541 END TEST spdk_dd_bdev_to_bdev 00:09:28.541 ************************************ 00:09:28.541 05:55:44 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:28.541 05:55:44 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:28.541 05:55:44 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:28.541 05:55:44 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.541 05:55:44 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.541 05:55:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:28.541 ************************************ 00:09:28.541 START TEST spdk_dd_uring 00:09:28.541 ************************************ 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:28.541 * Looking for test storage... 
00:09:28.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:28.541 ************************************ 00:09:28.541 START TEST dd_uring_copy 00:09:28.541 ************************************ 00:09:28.541 
05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=4uhz7y7up4oh7w8bcrk8oxujicte6vpfkoyiqykqbonb12c97c2gz93fq60r86eghpbvcncqiua3ebn1al9p00v10a7go8sf5fkc1mumtmlkkeg23vyoamsuwejioy7vnl0u515w9prh9o9w6av56datoro0btbvy0nvtm9ztodoorhi3kc7gu1q4mi6nob11kiwpmzkzn21hiik63fmz5g354dv6c0xb86wftp5jcb2at3vh6uv2ims3isu2xpocx8dstuzy7iffh33n90scht54fl19bhhkhw3klvvgf2mxyh3zkbdg6kcaomaboii199fduemyxtebp3fc70kb6ygiakxut4a8x1c6qa9uidt80fdkw372r53eirw17tmcku38ct4ebr7x8zrrapymrfuosb8m5rkik6fi0ctjdahrn6ps1c0whq8hf1qk9e22h1hscld55sda40j1hsl0zy0zm53z9uttujoi2rh0fgw04yw1ijsa27p4n1i342lkvt1hlek4d9f7r6iekqmvny1jwgk1w5l5y72sodsinhawdp1cic0atgdpjlkzqo2523debibi4j4yh0pj8iy8edsyirylmgjdcafl6hyn034jiqt5l3rm65fc0hypplaluugp7x6oemkydhzipps2ezariknwc0zn325uf95vlanrngyd75ga1jggu200re4psys2qm1iw7ps0g1tksbfatyofcg9i4iz98rbmuetdswb7f0klhl0dpmjum81g26sygmyrcprtlw1tuy0raquctqntf6yt0r5yriinh3w55w6fmsj8a87udd8bdrs1yn55ech8iclmkhpsx0w7rqu1gmssakz5vrtaodyrb0nlu7znxgv3qz26ry7x9vq2lrucky8cgkqf2a7qzap4fpt3f5tk80mngc35zh20qfgqa8oi0n7cclflyk13ixdqkq3emaa35kr9exm0x3h6n1ir07kkw9djxm25tnasmf5r6wh08pxzv7kjt75f1kdcjy 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 4uhz7y7up4oh7w8bcrk8oxujicte6vpfkoyiqykqbonb12c97c2gz93fq60r86eghpbvcncqiua3ebn1al9p00v10a7go8sf5fkc1mumtmlkkeg23vyoamsuwejioy7vnl0u515w9prh9o9w6av56datoro0btbvy0nvtm9ztodoorhi3kc7gu1q4mi6nob11kiwpmzkzn21hiik63fmz5g354dv6c0xb86wftp5jcb2at3vh6uv2ims3isu2xpocx8dstuzy7iffh33n90scht54fl19bhhkhw3klvvgf2mxyh3zkbdg6kcaomaboii199fduemyxtebp3fc70kb6ygiakxut4a8x1c6qa9uidt80fdkw372r53eirw17tmcku38ct4ebr7x8zrrapymrfuosb8m5rkik6fi0ctjdahrn6ps1c0whq8hf1qk9e22h1hscld55sda40j1hsl0zy0zm53z9uttujoi2rh0fgw04yw1ijsa27p4n1i342lkvt1hlek4d9f7r6iekqmvny1jwgk1w5l5y72sodsinhawdp1cic0atgdpjlkzqo2523debibi4j4yh0pj8iy8edsyirylmgjdcafl6hyn034jiqt5l3rm65fc0hypplaluugp7x6oemkydhzipps2ezariknwc0zn325uf95vlanrngyd75ga1jggu200re4psys2qm1iw7ps0g1tksbfatyofcg9i4iz98rbmuetdswb7f0klhl0dpmjum81g26sygmyrcprtlw1tuy0raquctqntf6yt0r5yriinh3w55w6fmsj8a87udd8bdrs1yn55ech8iclmkhpsx0w7rqu1gmssakz5vrtaodyrb0nlu7znxgv3qz26ry7x9vq2lrucky8cgkqf2a7qzap4fpt3f5tk80mngc35zh20qfgqa8oi0n7cclflyk13ixdqkq3emaa35kr9exm0x3h6n1ir07kkw9djxm25tnasmf5r6wh08pxzv7kjt75f1kdcjy 00:09:28.541 05:55:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:28.800 [2024-07-11 05:55:44.507464] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
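Annotation: the append above is sized so that magic.dump0 ends up exactly as large as the 512M zram device configured earlier: the 1,024-byte magic from gen_bytes (plus, presumably, a trailing newline from echo) followed by a single 536,869,887-byte block of zeros.

    # Size check (trailing newline from echo assumed):
    echo $(( 1024 + 1 + 536869887 ))   # 536870912 bytes = 512 * 1048576, i.e. the zram1 size set above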
00:09:28.800 [2024-07-11 05:55:44.507659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66314 ] 00:09:28.800 [2024-07-11 05:55:44.679614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.059 [2024-07-11 05:55:44.878225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.319 [2024-07-11 05:55:45.030876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.161  Copying: 511/511 [MB] (average 1868 MBps) 00:09:32.161 00:09:32.161 05:55:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:32.161 05:55:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:32.161 05:55:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:32.161 05:55:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.161 { 00:09:32.161 "subsystems": [ 00:09:32.161 { 00:09:32.161 "subsystem": "bdev", 00:09:32.161 "config": [ 00:09:32.161 { 00:09:32.161 "params": { 00:09:32.161 "block_size": 512, 00:09:32.161 "num_blocks": 1048576, 00:09:32.161 "name": "malloc0" 00:09:32.161 }, 00:09:32.161 "method": "bdev_malloc_create" 00:09:32.161 }, 00:09:32.161 { 00:09:32.161 "params": { 00:09:32.161 "filename": "/dev/zram1", 00:09:32.161 "name": "uring0" 00:09:32.161 }, 00:09:32.161 "method": "bdev_uring_create" 00:09:32.161 }, 00:09:32.161 { 00:09:32.161 "method": "bdev_wait_for_examine" 00:09:32.161 } 00:09:32.161 ] 00:09:32.161 } 00:09:32.161 ] 00:09:32.161 } 00:09:32.161 [2024-07-11 05:55:47.896510] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:32.161 [2024-07-11 05:55:47.896715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66361 ] 00:09:32.161 [2024-07-11 05:55:48.065952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.420 [2024-07-11 05:55:48.238914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.679 [2024-07-11 05:55:48.421495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.179  Copying: 175/512 [MB] (175 MBps) Copying: 354/512 [MB] (178 MBps) Copying: 512/512 [MB] (average 176 MBps) 00:09:38.179 00:09:38.179 05:55:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:38.179 05:55:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:38.179 05:55:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:38.179 05:55:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:38.436 { 00:09:38.436 "subsystems": [ 00:09:38.436 { 00:09:38.436 "subsystem": "bdev", 00:09:38.436 "config": [ 00:09:38.436 { 00:09:38.436 "params": { 00:09:38.436 "block_size": 512, 00:09:38.436 "num_blocks": 1048576, 00:09:38.437 "name": "malloc0" 00:09:38.437 }, 00:09:38.437 "method": "bdev_malloc_create" 00:09:38.437 }, 00:09:38.437 { 00:09:38.437 "params": { 00:09:38.437 "filename": "/dev/zram1", 00:09:38.437 "name": "uring0" 00:09:38.437 }, 00:09:38.437 "method": "bdev_uring_create" 00:09:38.437 }, 00:09:38.437 { 00:09:38.437 "method": "bdev_wait_for_examine" 00:09:38.437 } 00:09:38.437 ] 00:09:38.437 } 00:09:38.437 ] 00:09:38.437 } 00:09:38.437 [2024-07-11 05:55:54.200014] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:38.437 [2024-07-11 05:55:54.200212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66444 ] 00:09:38.694 [2024-07-11 05:55:54.373699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.695 [2024-07-11 05:55:54.532087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.952 [2024-07-11 05:55:54.725501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.482  Copying: 149/512 [MB] (149 MBps) Copying: 282/512 [MB] (132 MBps) Copying: 429/512 [MB] (147 MBps) Copying: 512/512 [MB] (average 143 MBps) 00:09:45.482 00:09:45.482 05:56:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:45.483 05:56:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 4uhz7y7up4oh7w8bcrk8oxujicte6vpfkoyiqykqbonb12c97c2gz93fq60r86eghpbvcncqiua3ebn1al9p00v10a7go8sf5fkc1mumtmlkkeg23vyoamsuwejioy7vnl0u515w9prh9o9w6av56datoro0btbvy0nvtm9ztodoorhi3kc7gu1q4mi6nob11kiwpmzkzn21hiik63fmz5g354dv6c0xb86wftp5jcb2at3vh6uv2ims3isu2xpocx8dstuzy7iffh33n90scht54fl19bhhkhw3klvvgf2mxyh3zkbdg6kcaomaboii199fduemyxtebp3fc70kb6ygiakxut4a8x1c6qa9uidt80fdkw372r53eirw17tmcku38ct4ebr7x8zrrapymrfuosb8m5rkik6fi0ctjdahrn6ps1c0whq8hf1qk9e22h1hscld55sda40j1hsl0zy0zm53z9uttujoi2rh0fgw04yw1ijsa27p4n1i342lkvt1hlek4d9f7r6iekqmvny1jwgk1w5l5y72sodsinhawdp1cic0atgdpjlkzqo2523debibi4j4yh0pj8iy8edsyirylmgjdcafl6hyn034jiqt5l3rm65fc0hypplaluugp7x6oemkydhzipps2ezariknwc0zn325uf95vlanrngyd75ga1jggu200re4psys2qm1iw7ps0g1tksbfatyofcg9i4iz98rbmuetdswb7f0klhl0dpmjum81g26sygmyrcprtlw1tuy0raquctqntf6yt0r5yriinh3w55w6fmsj8a87udd8bdrs1yn55ech8iclmkhpsx0w7rqu1gmssakz5vrtaodyrb0nlu7znxgv3qz26ry7x9vq2lrucky8cgkqf2a7qzap4fpt3f5tk80mngc35zh20qfgqa8oi0n7cclflyk13ixdqkq3emaa35kr9exm0x3h6n1ir07kkw9djxm25tnasmf5r6wh08pxzv7kjt75f1kdcjy == 
\4\u\h\z\7\y\7\u\p\4\o\h\7\w\8\b\c\r\k\8\o\x\u\j\i\c\t\e\6\v\p\f\k\o\y\i\q\y\k\q\b\o\n\b\1\2\c\9\7\c\2\g\z\9\3\f\q\6\0\r\8\6\e\g\h\p\b\v\c\n\c\q\i\u\a\3\e\b\n\1\a\l\9\p\0\0\v\1\0\a\7\g\o\8\s\f\5\f\k\c\1\m\u\m\t\m\l\k\k\e\g\2\3\v\y\o\a\m\s\u\w\e\j\i\o\y\7\v\n\l\0\u\5\1\5\w\9\p\r\h\9\o\9\w\6\a\v\5\6\d\a\t\o\r\o\0\b\t\b\v\y\0\n\v\t\m\9\z\t\o\d\o\o\r\h\i\3\k\c\7\g\u\1\q\4\m\i\6\n\o\b\1\1\k\i\w\p\m\z\k\z\n\2\1\h\i\i\k\6\3\f\m\z\5\g\3\5\4\d\v\6\c\0\x\b\8\6\w\f\t\p\5\j\c\b\2\a\t\3\v\h\6\u\v\2\i\m\s\3\i\s\u\2\x\p\o\c\x\8\d\s\t\u\z\y\7\i\f\f\h\3\3\n\9\0\s\c\h\t\5\4\f\l\1\9\b\h\h\k\h\w\3\k\l\v\v\g\f\2\m\x\y\h\3\z\k\b\d\g\6\k\c\a\o\m\a\b\o\i\i\1\9\9\f\d\u\e\m\y\x\t\e\b\p\3\f\c\7\0\k\b\6\y\g\i\a\k\x\u\t\4\a\8\x\1\c\6\q\a\9\u\i\d\t\8\0\f\d\k\w\3\7\2\r\5\3\e\i\r\w\1\7\t\m\c\k\u\3\8\c\t\4\e\b\r\7\x\8\z\r\r\a\p\y\m\r\f\u\o\s\b\8\m\5\r\k\i\k\6\f\i\0\c\t\j\d\a\h\r\n\6\p\s\1\c\0\w\h\q\8\h\f\1\q\k\9\e\2\2\h\1\h\s\c\l\d\5\5\s\d\a\4\0\j\1\h\s\l\0\z\y\0\z\m\5\3\z\9\u\t\t\u\j\o\i\2\r\h\0\f\g\w\0\4\y\w\1\i\j\s\a\2\7\p\4\n\1\i\3\4\2\l\k\v\t\1\h\l\e\k\4\d\9\f\7\r\6\i\e\k\q\m\v\n\y\1\j\w\g\k\1\w\5\l\5\y\7\2\s\o\d\s\i\n\h\a\w\d\p\1\c\i\c\0\a\t\g\d\p\j\l\k\z\q\o\2\5\2\3\d\e\b\i\b\i\4\j\4\y\h\0\p\j\8\i\y\8\e\d\s\y\i\r\y\l\m\g\j\d\c\a\f\l\6\h\y\n\0\3\4\j\i\q\t\5\l\3\r\m\6\5\f\c\0\h\y\p\p\l\a\l\u\u\g\p\7\x\6\o\e\m\k\y\d\h\z\i\p\p\s\2\e\z\a\r\i\k\n\w\c\0\z\n\3\2\5\u\f\9\5\v\l\a\n\r\n\g\y\d\7\5\g\a\1\j\g\g\u\2\0\0\r\e\4\p\s\y\s\2\q\m\1\i\w\7\p\s\0\g\1\t\k\s\b\f\a\t\y\o\f\c\g\9\i\4\i\z\9\8\r\b\m\u\e\t\d\s\w\b\7\f\0\k\l\h\l\0\d\p\m\j\u\m\8\1\g\2\6\s\y\g\m\y\r\c\p\r\t\l\w\1\t\u\y\0\r\a\q\u\c\t\q\n\t\f\6\y\t\0\r\5\y\r\i\i\n\h\3\w\5\5\w\6\f\m\s\j\8\a\8\7\u\d\d\8\b\d\r\s\1\y\n\5\5\e\c\h\8\i\c\l\m\k\h\p\s\x\0\w\7\r\q\u\1\g\m\s\s\a\k\z\5\v\r\t\a\o\d\y\r\b\0\n\l\u\7\z\n\x\g\v\3\q\z\2\6\r\y\7\x\9\v\q\2\l\r\u\c\k\y\8\c\g\k\q\f\2\a\7\q\z\a\p\4\f\p\t\3\f\5\t\k\8\0\m\n\g\c\3\5\z\h\2\0\q\f\g\q\a\8\o\i\0\n\7\c\c\l\f\l\y\k\1\3\i\x\d\q\k\q\3\e\m\a\a\3\5\k\r\9\e\x\m\0\x\3\h\6\n\1\i\r\0\7\k\k\w\9\d\j\x\m\2\5\t\n\a\s\m\f\5\r\6\w\h\0\8\p\x\z\v\7\k\j\t\7\5\f\1\k\d\c\j\y ]] 00:09:45.483 05:56:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:45.483 05:56:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 4uhz7y7up4oh7w8bcrk8oxujicte6vpfkoyiqykqbonb12c97c2gz93fq60r86eghpbvcncqiua3ebn1al9p00v10a7go8sf5fkc1mumtmlkkeg23vyoamsuwejioy7vnl0u515w9prh9o9w6av56datoro0btbvy0nvtm9ztodoorhi3kc7gu1q4mi6nob11kiwpmzkzn21hiik63fmz5g354dv6c0xb86wftp5jcb2at3vh6uv2ims3isu2xpocx8dstuzy7iffh33n90scht54fl19bhhkhw3klvvgf2mxyh3zkbdg6kcaomaboii199fduemyxtebp3fc70kb6ygiakxut4a8x1c6qa9uidt80fdkw372r53eirw17tmcku38ct4ebr7x8zrrapymrfuosb8m5rkik6fi0ctjdahrn6ps1c0whq8hf1qk9e22h1hscld55sda40j1hsl0zy0zm53z9uttujoi2rh0fgw04yw1ijsa27p4n1i342lkvt1hlek4d9f7r6iekqmvny1jwgk1w5l5y72sodsinhawdp1cic0atgdpjlkzqo2523debibi4j4yh0pj8iy8edsyirylmgjdcafl6hyn034jiqt5l3rm65fc0hypplaluugp7x6oemkydhzipps2ezariknwc0zn325uf95vlanrngyd75ga1jggu200re4psys2qm1iw7ps0g1tksbfatyofcg9i4iz98rbmuetdswb7f0klhl0dpmjum81g26sygmyrcprtlw1tuy0raquctqntf6yt0r5yriinh3w55w6fmsj8a87udd8bdrs1yn55ech8iclmkhpsx0w7rqu1gmssakz5vrtaodyrb0nlu7znxgv3qz26ry7x9vq2lrucky8cgkqf2a7qzap4fpt3f5tk80mngc35zh20qfgqa8oi0n7cclflyk13ixdqkq3emaa35kr9exm0x3h6n1ir07kkw9djxm25tnasmf5r6wh08pxzv7kjt75f1kdcjy == 
\4\u\h\z\7\y\7\u\p\4\o\h\7\w\8\b\c\r\k\8\o\x\u\j\i\c\t\e\6\v\p\f\k\o\y\i\q\y\k\q\b\o\n\b\1\2\c\9\7\c\2\g\z\9\3\f\q\6\0\r\8\6\e\g\h\p\b\v\c\n\c\q\i\u\a\3\e\b\n\1\a\l\9\p\0\0\v\1\0\a\7\g\o\8\s\f\5\f\k\c\1\m\u\m\t\m\l\k\k\e\g\2\3\v\y\o\a\m\s\u\w\e\j\i\o\y\7\v\n\l\0\u\5\1\5\w\9\p\r\h\9\o\9\w\6\a\v\5\6\d\a\t\o\r\o\0\b\t\b\v\y\0\n\v\t\m\9\z\t\o\d\o\o\r\h\i\3\k\c\7\g\u\1\q\4\m\i\6\n\o\b\1\1\k\i\w\p\m\z\k\z\n\2\1\h\i\i\k\6\3\f\m\z\5\g\3\5\4\d\v\6\c\0\x\b\8\6\w\f\t\p\5\j\c\b\2\a\t\3\v\h\6\u\v\2\i\m\s\3\i\s\u\2\x\p\o\c\x\8\d\s\t\u\z\y\7\i\f\f\h\3\3\n\9\0\s\c\h\t\5\4\f\l\1\9\b\h\h\k\h\w\3\k\l\v\v\g\f\2\m\x\y\h\3\z\k\b\d\g\6\k\c\a\o\m\a\b\o\i\i\1\9\9\f\d\u\e\m\y\x\t\e\b\p\3\f\c\7\0\k\b\6\y\g\i\a\k\x\u\t\4\a\8\x\1\c\6\q\a\9\u\i\d\t\8\0\f\d\k\w\3\7\2\r\5\3\e\i\r\w\1\7\t\m\c\k\u\3\8\c\t\4\e\b\r\7\x\8\z\r\r\a\p\y\m\r\f\u\o\s\b\8\m\5\r\k\i\k\6\f\i\0\c\t\j\d\a\h\r\n\6\p\s\1\c\0\w\h\q\8\h\f\1\q\k\9\e\2\2\h\1\h\s\c\l\d\5\5\s\d\a\4\0\j\1\h\s\l\0\z\y\0\z\m\5\3\z\9\u\t\t\u\j\o\i\2\r\h\0\f\g\w\0\4\y\w\1\i\j\s\a\2\7\p\4\n\1\i\3\4\2\l\k\v\t\1\h\l\e\k\4\d\9\f\7\r\6\i\e\k\q\m\v\n\y\1\j\w\g\k\1\w\5\l\5\y\7\2\s\o\d\s\i\n\h\a\w\d\p\1\c\i\c\0\a\t\g\d\p\j\l\k\z\q\o\2\5\2\3\d\e\b\i\b\i\4\j\4\y\h\0\p\j\8\i\y\8\e\d\s\y\i\r\y\l\m\g\j\d\c\a\f\l\6\h\y\n\0\3\4\j\i\q\t\5\l\3\r\m\6\5\f\c\0\h\y\p\p\l\a\l\u\u\g\p\7\x\6\o\e\m\k\y\d\h\z\i\p\p\s\2\e\z\a\r\i\k\n\w\c\0\z\n\3\2\5\u\f\9\5\v\l\a\n\r\n\g\y\d\7\5\g\a\1\j\g\g\u\2\0\0\r\e\4\p\s\y\s\2\q\m\1\i\w\7\p\s\0\g\1\t\k\s\b\f\a\t\y\o\f\c\g\9\i\4\i\z\9\8\r\b\m\u\e\t\d\s\w\b\7\f\0\k\l\h\l\0\d\p\m\j\u\m\8\1\g\2\6\s\y\g\m\y\r\c\p\r\t\l\w\1\t\u\y\0\r\a\q\u\c\t\q\n\t\f\6\y\t\0\r\5\y\r\i\i\n\h\3\w\5\5\w\6\f\m\s\j\8\a\8\7\u\d\d\8\b\d\r\s\1\y\n\5\5\e\c\h\8\i\c\l\m\k\h\p\s\x\0\w\7\r\q\u\1\g\m\s\s\a\k\z\5\v\r\t\a\o\d\y\r\b\0\n\l\u\7\z\n\x\g\v\3\q\z\2\6\r\y\7\x\9\v\q\2\l\r\u\c\k\y\8\c\g\k\q\f\2\a\7\q\z\a\p\4\f\p\t\3\f\5\t\k\8\0\m\n\g\c\3\5\z\h\2\0\q\f\g\q\a\8\o\i\0\n\7\c\c\l\f\l\y\k\1\3\i\x\d\q\k\q\3\e\m\a\a\3\5\k\r\9\e\x\m\0\x\3\h\6\n\1\i\r\0\7\k\k\w\9\d\j\x\m\2\5\t\n\a\s\m\f\5\r\6\w\h\0\8\p\x\z\v\7\k\j\t\7\5\f\1\k\d\c\j\y ]] 00:09:45.483 05:56:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:45.483 05:56:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:45.483 05:56:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:45.483 05:56:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:45.483 05:56:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:45.483 { 00:09:45.483 "subsystems": [ 00:09:45.483 { 00:09:45.483 "subsystem": "bdev", 00:09:45.483 "config": [ 00:09:45.483 { 00:09:45.483 "params": { 00:09:45.483 "block_size": 512, 00:09:45.483 "num_blocks": 1048576, 00:09:45.483 "name": "malloc0" 00:09:45.483 }, 00:09:45.483 "method": "bdev_malloc_create" 00:09:45.483 }, 00:09:45.483 { 00:09:45.483 "params": { 00:09:45.483 "filename": "/dev/zram1", 00:09:45.483 "name": "uring0" 00:09:45.483 }, 00:09:45.483 "method": "bdev_uring_create" 00:09:45.483 }, 00:09:45.483 { 00:09:45.483 "method": "bdev_wait_for_examine" 00:09:45.483 } 00:09:45.483 ] 00:09:45.483 } 00:09:45.483 ] 00:09:45.483 } 00:09:45.483 [2024-07-11 05:56:01.362728] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:45.483 [2024-07-11 05:56:01.362911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66548 ] 00:09:45.742 [2024-07-11 05:56:01.536124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.001 [2024-07-11 05:56:01.774811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.260 [2024-07-11 05:56:01.944026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:52.412  Copying: 146/512 [MB] (146 MBps) Copying: 294/512 [MB] (148 MBps) Copying: 403/512 [MB] (108 MBps) Copying: 512/512 [MB] (average 137 MBps) 00:09:52.412 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:52.412 05:56:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 { 00:09:52.412 "subsystems": [ 00:09:52.412 { 00:09:52.412 "subsystem": "bdev", 00:09:52.412 "config": [ 00:09:52.412 { 00:09:52.412 "params": { 00:09:52.412 "block_size": 512, 00:09:52.412 "num_blocks": 1048576, 00:09:52.412 "name": "malloc0" 00:09:52.412 }, 00:09:52.412 "method": "bdev_malloc_create" 00:09:52.412 }, 00:09:52.412 { 00:09:52.412 "params": { 00:09:52.412 "filename": "/dev/zram1", 00:09:52.412 "name": "uring0" 00:09:52.413 }, 00:09:52.413 "method": "bdev_uring_create" 00:09:52.413 }, 00:09:52.413 { 00:09:52.413 "params": { 00:09:52.413 "name": "uring0" 00:09:52.413 }, 00:09:52.413 "method": "bdev_uring_delete" 00:09:52.413 }, 00:09:52.413 { 00:09:52.413 "method": "bdev_wait_for_examine" 00:09:52.413 } 00:09:52.413 ] 00:09:52.413 } 00:09:52.413 ] 00:09:52.413 } 00:09:52.413 [2024-07-11 05:56:08.289697] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:09:52.413 [2024-07-11 05:56:08.289865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66633 ] 00:09:52.671 [2024-07-11 05:56:08.459617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.929 [2024-07-11 05:56:08.669462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.929 [2024-07-11 05:56:08.816881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.402  Copying: 0/0 [B] (average 0 Bps) 00:09:55.402 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:55.402 05:56:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:55.661 { 00:09:55.661 "subsystems": [ 00:09:55.661 { 00:09:55.661 "subsystem": "bdev", 00:09:55.661 "config": [ 00:09:55.661 { 00:09:55.661 "params": { 00:09:55.661 "block_size": 512, 00:09:55.661 "num_blocks": 1048576, 00:09:55.661 "name": "malloc0" 00:09:55.661 }, 00:09:55.661 "method": "bdev_malloc_create" 00:09:55.661 }, 00:09:55.661 { 00:09:55.661 "params": { 00:09:55.661 "filename": "/dev/zram1", 00:09:55.661 "name": "uring0" 00:09:55.661 }, 00:09:55.661 "method": "bdev_uring_create" 00:09:55.661 }, 00:09:55.661 { 00:09:55.661 "params": { 00:09:55.661 "name": "uring0" 00:09:55.661 }, 00:09:55.661 "method": "bdev_uring_delete" 00:09:55.661 }, 
00:09:55.661 { 00:09:55.661 "method": "bdev_wait_for_examine" 00:09:55.661 } 00:09:55.661 ] 00:09:55.661 } 00:09:55.661 ] 00:09:55.661 } 00:09:55.661 [2024-07-11 05:56:11.361469] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:55.661 [2024-07-11 05:56:11.361620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66685 ] 00:09:55.661 [2024-07-11 05:56:11.509825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.919 [2024-07-11 05:56:11.669826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.919 [2024-07-11 05:56:11.815500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.487 [2024-07-11 05:56:12.375687] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:56.487 [2024-07-11 05:56:12.375808] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:56.487 [2024-07-11 05:56:12.375827] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:56.487 [2024-07-11 05:56:12.375846] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.389 [2024-07-11 05:56:13.941088] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:09:58.389 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:58.648 ************************************ 00:09:58.648 END TEST dd_uring_copy 00:09:58.648 ************************************ 00:09:58.648 00:09:58.648 real 0m30.127s 00:09:58.648 user 0m24.607s 00:09:58.648 sys 0m16.118s 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:09:58.648 00:09:58.648 real 0m30.264s 00:09:58.648 user 0m24.660s 00:09:58.648 sys 0m16.198s 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.648 05:56:14 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.648 ************************************ 00:09:58.648 END TEST spdk_dd_uring 00:09:58.648 ************************************ 00:09:58.906 05:56:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:58.906 05:56:14 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:58.906 05:56:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:58.906 05:56:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.906 05:56:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:58.906 ************************************ 00:09:58.906 START TEST spdk_dd_sparse 00:09:58.906 ************************************ 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:58.906 * Looking for test storage... 00:09:58.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:58.906 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:58.907 1+0 records in 00:09:58.907 1+0 records out 00:09:58.907 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00667201 s, 629 MB/s 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:58.907 1+0 records in 00:09:58.907 1+0 records out 00:09:58.907 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00671962 s, 624 MB/s 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:58.907 1+0 records in 00:09:58.907 1+0 records out 00:09:58.907 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00368911 s, 1.1 GB/s 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:58.907 ************************************ 00:09:58.907 START TEST dd_sparse_file_to_file 00:09:58.907 ************************************ 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:58.907 05:56:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:58.907 { 00:09:58.907 "subsystems": [ 00:09:58.907 { 00:09:58.907 "subsystem": "bdev", 00:09:58.907 "config": [ 00:09:58.907 { 00:09:58.907 "params": { 00:09:58.907 "block_size": 4096, 00:09:58.907 "filename": "dd_sparse_aio_disk", 00:09:58.907 "name": "dd_aio" 00:09:58.907 }, 00:09:58.907 "method": "bdev_aio_create" 00:09:58.907 }, 00:09:58.907 { 00:09:58.907 "params": { 00:09:58.907 "lvs_name": "dd_lvstore", 00:09:58.907 "bdev_name": "dd_aio" 00:09:58.907 }, 00:09:58.907 "method": "bdev_lvol_create_lvstore" 00:09:58.907 }, 00:09:58.907 { 00:09:58.907 "method": "bdev_wait_for_examine" 00:09:58.907 } 00:09:58.907 ] 00:09:58.907 } 00:09:58.907 ] 00:09:58.907 } 00:09:58.907 [2024-07-11 05:56:14.823791] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:09:58.907 [2024-07-11 05:56:14.823952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66794 ] 00:09:59.166 [2024-07-11 05:56:14.972202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.424 [2024-07-11 05:56:15.115198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.424 [2024-07-11 05:56:15.266612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.620  Copying: 12/36 [MB] (average 1090 MBps) 00:10:00.620 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:00.620 00:10:00.620 real 0m1.670s 00:10:00.620 user 0m1.400s 00:10:00.620 sys 0m0.801s 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:10:00.620 ************************************ 00:10:00.620 END TEST dd_sparse_file_to_file 00:10:00.620 ************************************ 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 ************************************ 00:10:00.620 START TEST dd_sparse_file_to_bdev 00:10:00.620 ************************************ 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:00.620 05:56:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:00.620 { 00:10:00.620 "subsystems": [ 00:10:00.620 { 00:10:00.620 "subsystem": "bdev", 00:10:00.620 "config": [ 00:10:00.620 { 00:10:00.620 "params": { 00:10:00.620 "block_size": 4096, 00:10:00.620 "filename": "dd_sparse_aio_disk", 00:10:00.620 "name": "dd_aio" 00:10:00.620 }, 00:10:00.620 "method": "bdev_aio_create" 00:10:00.620 }, 00:10:00.620 { 00:10:00.620 "params": { 00:10:00.620 "lvs_name": "dd_lvstore", 00:10:00.620 "lvol_name": "dd_lvol", 00:10:00.620 "size_in_mib": 36, 00:10:00.620 "thin_provision": true 00:10:00.620 }, 00:10:00.620 "method": "bdev_lvol_create" 00:10:00.620 }, 00:10:00.620 { 00:10:00.620 "method": "bdev_wait_for_examine" 00:10:00.620 } 00:10:00.620 ] 00:10:00.620 } 00:10:00.620 ] 00:10:00.620 } 00:10:00.878 [2024-07-11 05:56:16.555290] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
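The dd_sparse_file_to_bdev config dumped above pairs an AIO bdev on dd_sparse_aio_disk with a 36 MiB thin-provisioned lvol in the dd_lvstore that the previous sub-test created on the same disk. A hedged sketch of the equivalent standalone invocation, assuming a scratch config file named sparse_bdev.json (only that name is invented; the parameters and flags mirror sparse.sh@65 and sparse.sh@73 above):

# Illustrative config file; contents copied from the gen_conf dump above.
cat > sparse_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true },
          "method": "bdev_lvol_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# --bs and --sparse mirror the sparse.sh@73 invocation captured above;
# the sparse file_zero2 is written into the thin lvol dd_lvstore/dd_lvol.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol \
    --bs=12582912 --sparse --json sparse_bdev.json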
00:10:00.878 [2024-07-11 05:56:16.555453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66854 ] 00:10:00.878 [2024-07-11 05:56:16.723387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.136 [2024-07-11 05:56:16.888101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.136 [2024-07-11 05:56:17.039030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:02.332  Copying: 12/36 [MB] (average 461 MBps) 00:10:02.332 00:10:02.332 00:10:02.332 real 0m1.723s 00:10:02.332 user 0m1.422s 00:10:02.332 sys 0m0.831s 00:10:02.332 ************************************ 00:10:02.332 END TEST dd_sparse_file_to_bdev 00:10:02.332 ************************************ 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:02.332 ************************************ 00:10:02.332 START TEST dd_sparse_bdev_to_file 00:10:02.332 ************************************ 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:02.332 05:56:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 { 00:10:02.590 "subsystems": [ 00:10:02.590 { 00:10:02.590 "subsystem": "bdev", 00:10:02.590 "config": [ 00:10:02.590 { 00:10:02.590 "params": { 00:10:02.590 "block_size": 4096, 00:10:02.590 "filename": "dd_sparse_aio_disk", 00:10:02.590 "name": "dd_aio" 00:10:02.590 }, 00:10:02.590 "method": "bdev_aio_create" 00:10:02.590 }, 00:10:02.590 { 00:10:02.590 "method": "bdev_wait_for_examine" 00:10:02.590 } 00:10:02.590 ] 00:10:02.590 } 00:10:02.590 ] 00:10:02.590 } 00:10:02.590 [2024-07-11 
05:56:18.324648] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:02.590 [2024-07-11 05:56:18.325019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66893 ] 00:10:02.590 [2024-07-11 05:56:18.482882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.849 [2024-07-11 05:56:18.657441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.108 [2024-07-11 05:56:18.824167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.487  Copying: 12/36 [MB] (average 1000 MBps) 00:10:04.487 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:04.487 00:10:04.487 real 0m1.854s 00:10:04.487 user 0m1.567s 00:10:04.487 sys 0m0.907s 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.487 ************************************ 00:10:04.487 END TEST dd_sparse_bdev_to_file 00:10:04.487 ************************************ 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:04.487 ************************************ 00:10:04.487 END TEST spdk_dd_sparse 00:10:04.487 ************************************ 00:10:04.487 00:10:04.487 real 0m5.555s 00:10:04.487 user 0m4.497s 00:10:04.487 sys 0m2.725s 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.487 05:56:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 05:56:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:04.487 05:56:20 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative 
/home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:04.487 05:56:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.487 05:56:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.487 05:56:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:04.487 ************************************ 00:10:04.487 START TEST spdk_dd_negative 00:10:04.487 ************************************ 00:10:04.487 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:04.487 * Looking for test storage... 00:10:04.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:04.487 05:56:20 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.487 05:56:20 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.487 05:56:20 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.488 ************************************ 00:10:04.488 START TEST dd_invalid_arguments 00:10:04.488 ************************************ 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.488 05:56:20 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.488 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:04.488 00:10:04.488 CPU options: 00:10:04.488 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:04.488 (like [0,1,10]) 00:10:04.488 --lcores lcore to CPU mapping list. The list is in the format: 00:10:04.488 [<,lcores[@CPUs]>...] 00:10:04.488 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:04.488 Within the group, '-' is used for range separator, 00:10:04.488 ',' is used for single number separator. 00:10:04.488 '( )' can be omitted for single element group, 00:10:04.488 '@' can be omitted if cpus and lcores have the same value 00:10:04.488 --disable-cpumask-locks Disable CPU core lock files. 00:10:04.488 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:04.488 pollers in the app support interrupt mode) 00:10:04.488 -p, --main-core main (primary) core for DPDK 00:10:04.488 00:10:04.488 Configuration options: 00:10:04.488 -c, --config, --json JSON config file 00:10:04.488 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:04.488 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:10:04.488 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:04.488 --rpcs-allowed comma-separated list of permitted RPCS 00:10:04.488 --json-ignore-init-errors don't exit on invalid config entry 00:10:04.488 00:10:04.488 Memory options: 00:10:04.488 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:04.488 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:04.488 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:04.488 -R, --huge-unlink unlink huge files after initialization 00:10:04.488 -n, --mem-channels number of memory channels used for DPDK 00:10:04.488 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:04.488 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:04.488 --no-huge run without using hugepages 00:10:04.488 -i, --shm-id shared memory ID (optional) 00:10:04.488 -g, --single-file-segments force creating just one hugetlbfs file 00:10:04.488 00:10:04.488 PCI options: 00:10:04.488 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:04.488 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:04.488 -u, --no-pci disable PCI access 00:10:04.488 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:04.488 00:10:04.488 Log options: 00:10:04.488 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:04.488 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:04.488 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:04.488 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:04.488 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:10:04.488 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:10:04.488 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:10:04.488 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:10:04.488 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:10:04.488 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:10:04.488 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 
virtio_blk, virtio_dev, 00:10:04.488 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:04.488 --silence-noticelog disable notice level logging to stderr 00:10:04.488 00:10:04.488 Trace options: 00:10:04.488 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:04.488 setting 0 to disable trace (default 32768) 00:10:04.488 Tracepoints vary in size and can use more than one trace entry. 00:10:04.488 -e, --tpoint-group [: 128 )) 00:10:04.748 05:56:20 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:04.748 05:56:20 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:04.748 00:10:04.748 real 0m0.163s 00:10:04.748 user 0m0.082s 00:10:04.748 sys 0m0.079s 00:10:04.748 05:56:20 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.748 ************************************ 00:10:04.748 END TEST dd_double_input 00:10:04.748 ************************************ 00:10:04.748 05:56:20 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.008 ************************************ 00:10:05.008 START TEST dd_double_output 00:10:05.008 ************************************ 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:05.008 [2024-07-11 05:56:20.808092] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.008 00:10:05.008 real 0m0.158s 00:10:05.008 user 0m0.082s 00:10:05.008 sys 0m0.075s 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:05.008 ************************************ 00:10:05.008 END TEST dd_double_output 00:10:05.008 ************************************ 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.008 ************************************ 00:10:05.008 START TEST dd_no_input 00:10:05.008 ************************************ 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.008 05:56:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:05.267 [2024-07-11 05:56:20.993525] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.267 ************************************ 00:10:05.267 END TEST dd_no_input 00:10:05.267 ************************************ 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.267 00:10:05.267 real 0m0.131s 00:10:05.267 user 0m0.068s 00:10:05.267 sys 0m0.061s 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.267 ************************************ 00:10:05.267 START TEST dd_no_output 00:10:05.267 ************************************ 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.267 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:05.527 [2024-07-11 05:56:21.192554] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:05.527 ************************************ 00:10:05.527 END TEST dd_no_output 00:10:05.527 ************************************ 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.527 00:10:05.527 real 0m0.160s 00:10:05.527 user 0m0.086s 00:10:05.527 sys 0m0.072s 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.527 ************************************ 00:10:05.527 START TEST dd_wrong_blocksize 00:10:05.527 ************************************ 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.527 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.527 [2024-07-11 05:56:21.402428] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.786 00:10:05.786 real 0m0.159s 00:10:05.786 user 0m0.097s 00:10:05.786 sys 0m0.060s 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.786 ************************************ 00:10:05.786 END TEST dd_wrong_blocksize 00:10:05.786 ************************************ 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.786 ************************************ 00:10:05.786 START TEST dd_smaller_blocksize 00:10:05.786 ************************************ 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.786 05:56:21 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.786 05:56:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.786 [2024-07-11 05:56:21.618773] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:05.786 [2024-07-11 05:56:21.618952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67141 ] 00:10:06.045 [2024-07-11 05:56:21.792902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.304 [2024-07-11 05:56:22.006671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.304 [2024-07-11 05:56:22.171542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.873 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:06.873 [2024-07-11 05:56:22.556173] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:06.873 [2024-07-11 05:56:22.556268] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:07.443 [2024-07-11 05:56:23.185175] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.702 ************************************ 00:10:07.702 END TEST dd_smaller_blocksize 00:10:07.702 ************************************ 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.702 00:10:07.702 real 0m2.074s 00:10:07.702 user 0m1.534s 00:10:07.702 sys 0m0.426s 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.702 05:56:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:07.960 05:56:23 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:07.960 ************************************ 00:10:07.960 START TEST dd_invalid_count 00:10:07.960 ************************************ 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.960 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.961 [2024-07-11 05:56:23.736749] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.961 00:10:07.961 real 0m0.152s 00:10:07.961 user 0m0.089s 00:10:07.961 sys 0m0.061s 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.961 ************************************ 00:10:07.961 END TEST dd_invalid_count 00:10:07.961 ************************************ 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:07.961 ************************************ 00:10:07.961 START TEST dd_invalid_oflag 00:10:07.961 ************************************ 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:07.961 05:56:23 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:08.218 [2024-07-11 05:56:23.948285] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:08.218 00:10:08.218 real 0m0.161s 00:10:08.218 user 0m0.089s 00:10:08.218 sys 0m0.070s 00:10:08.218 05:56:24 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.218 ************************************ 00:10:08.218 END TEST dd_invalid_oflag 00:10:08.218 ************************************ 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.218 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:08.219 ************************************ 00:10:08.219 START TEST dd_invalid_iflag 00:10:08.219 ************************************ 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:08.219 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:08.477 [2024-07-11 05:56:24.151695] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:08.477 00:10:08.477 real 0m0.160s 00:10:08.477 user 0m0.094s 
00:10:08.477 sys 0m0.065s 00:10:08.477 ************************************ 00:10:08.477 END TEST dd_invalid_iflag 00:10:08.477 ************************************ 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:08.477 ************************************ 00:10:08.477 START TEST dd_unknown_flag 00:10:08.477 ************************************ 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.477 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.478 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.478 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:08.478 05:56:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.478 [2024-07-11 05:56:24.370116] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:10:08.478 [2024-07-11 05:56:24.370283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67253 ] 00:10:08.736 [2024-07-11 05:56:24.543346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.995 [2024-07-11 05:56:24.773583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.254 [2024-07-11 05:56:24.972619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.254 [2024-07-11 05:56:25.054794] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:09.254 [2024-07-11 05:56:25.054877] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.254 [2024-07-11 05:56:25.054963] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:09.254 [2024-07-11 05:56:25.054981] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.254 [2024-07-11 05:56:25.055247] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:09.254 [2024-07-11 05:56:25.055284] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.254 [2024-07-11 05:56:25.055363] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:09.254 [2024-07-11 05:56:25.055378] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:10.188 [2024-07-11 05:56:25.765052] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:10.448 00:10:10.448 real 0m1.954s 00:10:10.448 user 0m1.628s 00:10:10.448 sys 0m0.218s 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:10.448 ************************************ 00:10:10.448 END TEST dd_unknown_flag 00:10:10.448 ************************************ 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:10.448 ************************************ 00:10:10.448 START TEST dd_invalid_json 00:10:10.448 ************************************ 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:10:10.448 05:56:26 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:10.448 05:56:26 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:10.707 [2024-07-11 05:56:26.369272] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:10:10.707 [2024-07-11 05:56:26.369456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67298 ] 00:10:10.707 [2024-07-11 05:56:26.543030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.988 [2024-07-11 05:56:26.770383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.988 [2024-07-11 05:56:26.770485] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:10.988 [2024-07-11 05:56:26.770523] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:10.988 [2024-07-11 05:56:26.770541] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.988 [2024-07-11 05:56:26.770632] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:10:11.557 ************************************ 00:10:11.557 END TEST dd_invalid_json 00:10:11.557 ************************************ 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.557 00:10:11.557 real 0m0.943s 00:10:11.557 user 0m0.707s 00:10:11.557 sys 0m0.131s 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:11.557 ************************************ 00:10:11.557 END TEST spdk_dd_negative 00:10:11.557 ************************************ 00:10:11.557 00:10:11.557 real 0m7.037s 00:10:11.557 user 0m4.827s 00:10:11.557 sys 0m1.814s 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.557 05:56:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:11.557 05:56:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:11.557 00:10:11.557 real 2m50.806s 00:10:11.557 user 2m19.718s 00:10:11.557 sys 0m58.533s 00:10:11.557 ************************************ 00:10:11.557 END TEST spdk_dd 00:10:11.557 ************************************ 00:10:11.557 05:56:27 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.557 05:56:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:11.557 05:56:27 -- common/autotest_common.sh@1142 -- # return 0 00:10:11.557 05:56:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:11.557 05:56:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.557 05:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:11.557 05:56:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:11.557 05:56:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:11.557 05:56:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:11.557 05:56:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.557 05:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.557 05:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:11.557 ************************************ 00:10:11.557 START TEST nvmf_tcp 00:10:11.557 ************************************ 00:10:11.557 05:56:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:11.557 * Looking for test storage... 00:10:11.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.557 05:56:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.557 05:56:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.557 05:56:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.557 05:56:27 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.557 05:56:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.557 05:56:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.557 05:56:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:11.557 05:56:27 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.557 05:56:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:11.558 05:56:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.558 05:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:10:11.558 05:56:27 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:11.558 05:56:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.558 05:56:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.558 05:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.817 ************************************ 00:10:11.817 START TEST nvmf_host_management 00:10:11.817 ************************************ 00:10:11.817 
05:56:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:11.817 * Looking for test storage... 00:10:11.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.817 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:11.818 Cannot find device "nvmf_init_br" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:11.818 Cannot find device "nvmf_tgt_br" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.818 Cannot find device "nvmf_tgt_br2" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:11.818 Cannot find device "nvmf_init_br" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:11.818 Cannot find device "nvmf_tgt_br" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:11.818 05:56:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:11.818 Cannot find device "nvmf_tgt_br2" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:11.818 Cannot find device "nvmf_br" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:11.818 Cannot find device "nvmf_init_if" 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.818 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:12.077 00:10:12.077 --- 10.0.0.2 ping statistics --- 00:10:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.077 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:10:12.077 00:10:12.077 --- 10.0.0.3 ping statistics --- 00:10:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.077 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:12.077 00:10:12.077 --- 10.0.0.1 ping statistics --- 00:10:12.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.077 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.077 05:56:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.336 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67565 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67565 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67565 ']' 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.336 05:56:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.336 [2024-07-11 05:56:28.133463] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:12.336 [2024-07-11 05:56:28.133883] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.595 [2024-07-11 05:56:28.296678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.595 [2024-07-11 05:56:28.513446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.595 [2024-07-11 05:56:28.513694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.595 [2024-07-11 05:56:28.513855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.595 [2024-07-11 05:56:28.513924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.595 [2024-07-11 05:56:28.514066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.595 [2024-07-11 05:56:28.514767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.595 [2024-07-11 05:56:28.514878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.595 [2024-07-11 05:56:28.515017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.595 [2024-07-11 05:56:28.515028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:12.854 [2024-07-11 05:56:28.701965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.461 [2024-07-11 05:56:29.125309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:13.461 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.462 Malloc0 00:10:13.462 [2024-07-11 05:56:29.252481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67619 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67619 /var/tmp/bdevperf.sock 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 67619 ']' 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:13.462 { 00:10:13.462 "params": { 00:10:13.462 "name": "Nvme$subsystem", 00:10:13.462 "trtype": "$TEST_TRANSPORT", 00:10:13.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.462 "adrfam": "ipv4", 00:10:13.462 "trsvcid": "$NVMF_PORT", 00:10:13.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.462 "hdgst": ${hdgst:-false}, 00:10:13.462 "ddgst": ${ddgst:-false} 00:10:13.462 }, 00:10:13.462 "method": "bdev_nvme_attach_controller" 00:10:13.462 } 00:10:13.462 EOF 00:10:13.462 )") 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:13.462 05:56:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:13.462 "params": { 00:10:13.462 "name": "Nvme0", 00:10:13.462 "trtype": "tcp", 00:10:13.462 "traddr": "10.0.0.2", 00:10:13.462 "adrfam": "ipv4", 00:10:13.462 "trsvcid": "4420", 00:10:13.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:13.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:13.462 "hdgst": false, 00:10:13.462 "ddgst": false 00:10:13.462 }, 00:10:13.462 "method": "bdev_nvme_attach_controller" 00:10:13.462 }' 00:10:13.720 [2024-07-11 05:56:29.403888] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:10:13.720 [2024-07-11 05:56:29.404082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67619 ] 00:10:13.720 [2024-07-11 05:56:29.576189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.977 [2024-07-11 05:56:29.801472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.234 [2024-07-11 05:56:29.989788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.492 Running I/O for 10 seconds... 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.492 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.752 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.752 [2024-07-11 05:56:30.454849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.454999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.752 [2024-07-11 05:56:30.455199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same [2024-07-11 05:56:30.455502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nswith the state(5) to be set 00:10:14.753 id:0 cdw10:00000000 cdw11:00000000 00:10:14.753 [2024-07-11 05:56:30.455532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-11 05:56:30.455583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same id:0 cdw10:00000000 cdw11:00000000 00:10:14.753 with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.753 [2024-07-11 05:56:30.455667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455691] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.753 [2024-07-11 05:56:30.455693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same [2024-07-11 05:56:30.455705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cwith the state(5) to be set 00:10:14.753 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:10:14.753 [2024-07-11 05:56:30.455875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.455900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.455946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.455976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.455992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.753 [2024-07-11 05:56:30.456421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.753 [2024-07-11 05:56:30.456437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:14.754 [2024-07-11 05:56:30.456705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.456967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.456984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 
05:56:30.456998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.754 [2024-07-11 05:56:30.457805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.754 [2024-07-11 05:56:30.457818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.755 [2024-07-11 05:56:30.457833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.755 [2024-07-11 05:56:30.457847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.755 [2024-07-11 05:56:30.457863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.755 [2024-07-11 05:56:30.457876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.755 [2024-07-11 05:56:30.457891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.755 [2024-07-11 05:56:30.457904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.755 [2024-07-11 05:56:30.457920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.755 [2024-07-11 05:56:30.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.755 [2024-07-11 05:56:30.457947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:10:14.755 [2024-07-11 05:56:30.458208] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
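The disconnect logged just above is the intended effect of the nvmf_subsystem_remove_host call issued earlier in this test, and that call is only made once bdevperf has completed at least 100 reads on Nvme0n1 — the bdev_get_iostat / jq check earlier in the trace (read_io_count=259) is what verifies that. A minimal sketch of that polling pattern, using scripts/rpc.py directly rather than the suite's rpc_cmd wrapper, and with a simple sleep loop where host_management.sh uses a bounded retry counter:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Wait until bdevperf reports at least 100 completed reads on Nvme0n1.
    while :; do
        reads=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break
        sleep 0.25
    done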
00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.755 [2024-07-11 05:56:30.459536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:14.755 task offset: 40960 on job bdev=Nvme0n1 fails 00:10:14.755 00:10:14.755 Latency(us) 00:10:14.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:14.755 Job: Nvme0n1 ended in about 0.29 seconds with error 00:10:14.755 Verification LBA range: start 0x0 length 0x400 00:10:14.755 Nvme0n1 : 0.29 1120.13 70.01 224.03 0.00 45479.21 4289.63 42896.29 00:10:14.755 =================================================================================================================== 00:10:14.755 Total : 1120.13 70.01 224.03 0.00 45479.21 4289.63 42896.29 00:10:14.755 [2024-07-11 05:56:30.464957] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:14.755 [2024-07-11 05:56:30.465116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.755 05:56:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:14.755 [2024-07-11 05:56:30.476351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
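Stripped of the xtrace noise, the host-management step above comes down to two target-side RPCs on the same subsystem: removing the host NQN from the allow list force-closes its queue pairs (hence the SQ DELETION aborts and the bdevperf-side controller reset), and re-adding it lets that reset reconnect. A sketch using the NQNs from this run against the default target RPC socket:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Revoke access: the target tears down existing connections from host0.
    "$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access: the initiator's automatic controller reset can now reconnect.
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0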
00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67619 00:10:15.690 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67619) - No such process 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:15.690 { 00:10:15.690 "params": { 00:10:15.690 "name": "Nvme$subsystem", 00:10:15.690 "trtype": "$TEST_TRANSPORT", 00:10:15.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.690 "adrfam": "ipv4", 00:10:15.690 "trsvcid": "$NVMF_PORT", 00:10:15.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.690 "hdgst": ${hdgst:-false}, 00:10:15.690 "ddgst": ${ddgst:-false} 00:10:15.690 }, 00:10:15.690 "method": "bdev_nvme_attach_controller" 00:10:15.690 } 00:10:15.690 EOF 00:10:15.690 )") 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:15.690 05:56:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:15.690 "params": { 00:10:15.690 "name": "Nvme0", 00:10:15.690 "trtype": "tcp", 00:10:15.690 "traddr": "10.0.0.2", 00:10:15.690 "adrfam": "ipv4", 00:10:15.690 "trsvcid": "4420", 00:10:15.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:15.690 "hdgst": false, 00:10:15.690 "ddgst": false 00:10:15.690 }, 00:10:15.690 "method": "bdev_nvme_attach_controller" 00:10:15.690 }' 00:10:15.690 [2024-07-11 05:56:31.577757] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:15.690 [2024-07-11 05:56:31.577940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67658 ] 00:10:15.949 [2024-07-11 05:56:31.749444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.208 [2024-07-11 05:56:31.934762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.208 [2024-07-11 05:56:32.125066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:16.467 Running I/O for 1 seconds... 
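For readability, this is the bdev_nvme_attach_controller entry that gen_nvmf_target_json passes to bdevperf via --json in both runs above, with the shell escaping removed; the values are exactly those printed in the trace (--json takes a full SPDK subsystem config, so the helper wraps this entry before handing it over — only the attach call itself is shown here):

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }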
00:10:17.845 00:10:17.845 Latency(us) 00:10:17.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.845 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:17.845 Verification LBA range: start 0x0 length 0x400 00:10:17.845 Nvme0n1 : 1.03 1367.58 85.47 0.00 0.00 45917.02 5510.98 41704.73 00:10:17.845 =================================================================================================================== 00:10:17.845 Total : 1367.58 85.47 0.00 0.00 45917.02 5510.98 41704.73 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.787 rmmod nvme_tcp 00:10:18.787 rmmod nvme_fabrics 00:10:18.787 rmmod nvme_keyring 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67565 ']' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67565 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 67565 ']' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 67565 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67565 00:10:18.787 killing process with pid 67565 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67565' 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 67565 00:10:18.787 05:56:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 67565 00:10:20.161 [2024-07-11 05:56:35.665060] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:20.161 00:10:20.161 real 0m8.293s 00:10:20.161 user 0m32.542s 00:10:20.161 sys 0m1.577s 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.161 05:56:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.161 ************************************ 00:10:20.161 END TEST nvmf_host_management 00:10:20.161 ************************************ 00:10:20.161 05:56:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.161 05:56:35 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:20.161 05:56:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.161 05:56:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.161 05:56:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.161 ************************************ 00:10:20.161 START TEST nvmf_lvol 00:10:20.161 ************************************ 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:20.161 * Looking for test storage... 
00:10:20.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.161 05:56:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:20.162 05:56:35 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:20.162 Cannot find device "nvmf_tgt_br" 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.162 Cannot find device "nvmf_tgt_br2" 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:20.162 Cannot find device "nvmf_tgt_br" 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:20.162 05:56:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:20.162 Cannot find device "nvmf_tgt_br2" 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:20.162 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:20.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:20.420 00:10:20.420 --- 10.0.0.2 ping statistics --- 00:10:20.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.420 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:20.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:20.420 00:10:20.420 --- 10.0.0.3 ping statistics --- 00:10:20.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.420 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:10:20.420 00:10:20.420 --- 10.0.0.1 ping statistics --- 00:10:20.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.420 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67906 00:10:20.420 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67906 00:10:20.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 67906 ']' 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.421 05:56:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:20.679 [2024-07-11 05:56:36.406124] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:20.679 [2024-07-11 05:56:36.406299] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.679 [2024-07-11 05:56:36.579328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.938 [2024-07-11 05:56:36.809588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.938 [2024-07-11 05:56:36.809671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
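A condensed sketch of the fabric that nvmf_veth_init builds in the trace above, for reference: the target side lives in a network namespace, the initiator side stays in the root namespace, and a bridge plus veth pairs connect the two. Interface names, addresses and the 4420 port are the ones this run uses; root privileges and the iproute2/iptables tools are assumed.

  # namespace for the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: initiator side, target side, second target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring links up on both sides
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # tie the host-side ends together with a bridge
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic and bridge forwarding, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1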
00:10:20.938 [2024-07-11 05:56:36.809694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.938 [2024-07-11 05:56:36.809712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.938 [2024-07-11 05:56:36.809726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.938 [2024-07-11 05:56:36.810352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.938 [2024-07-11 05:56:36.810487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.938 [2024-07-11 05:56:36.810490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.196 [2024-07-11 05:56:36.989256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.454 05:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.712 [2024-07-11 05:56:37.513222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.712 05:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.970 05:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:21.970 05:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.229 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:22.229 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:22.795 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:22.795 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=05b46e30-b3bf-4bf3-9fe8-a7d2048bfb5a 00:10:22.795 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 05b46e30-b3bf-4bf3-9fe8-a7d2048bfb5a lvol 20 00:10:23.053 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb 00:10:23.053 05:56:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:23.310 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb 00:10:23.567 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:23.825 [2024-07-11 05:56:39.554207] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.825 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:24.082 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67982 00:10:24.082 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:24.082 05:56:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:25.014 05:56:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb MY_SNAPSHOT 00:10:25.271 05:56:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d63e1fac-0a08-401d-8a92-1f6c37ecd978 00:10:25.271 05:56:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb 30 00:10:25.529 05:56:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d63e1fac-0a08-401d-8a92-1f6c37ecd978 MY_CLONE 00:10:26.095 05:56:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d98c8f8d-f1b4-4df2-a353-b9c5982deda3 00:10:26.095 05:56:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d98c8f8d-f1b4-4df2-a353-b9c5982deda3 00:10:26.353 05:56:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67982 00:10:34.469 Initializing NVMe Controllers 00:10:34.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:34.469 Controller IO queue size 128, less than required. 00:10:34.469 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:34.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:34.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:34.469 Initialization complete. Launching workers. 
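The lvol operations above are deliberately issued while spdk_nvme_perf keeps the exported volume under random-write load; the 0x18 core mask keeps perf off the target's 0x7 cores. Condensed, the sequence this run performs is the following (the rpc.py path and the UUIDs are the ones printed in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 10 s of 4 KiB random writes against the exported lvol, in the background
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  # snapshot the live lvol, grow it, then clone the snapshot and inflate the clone
  $rpc bdev_lvol_snapshot 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb MY_SNAPSHOT
  $rpc bdev_lvol_resize   989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb 30
  $rpc bdev_lvol_clone    d63e1fac-0a08-401d-8a92-1f6c37ecd978 MY_CLONE
  $rpc bdev_lvol_inflate  d98c8f8d-f1b4-4df2-a353-b9c5982deda3
  wait "$perf_pid"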
00:10:34.469 ======================================================== 00:10:34.469 Latency(us) 00:10:34.469 Device Information : IOPS MiB/s Average min max 00:10:34.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9116.90 35.61 14045.58 295.83 149649.57 00:10:34.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8957.00 34.99 14292.45 4297.22 168025.13 00:10:34.469 ======================================================== 00:10:34.469 Total : 18073.90 70.60 14167.92 295.83 168025.13 00:10:34.469 00:10:34.469 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:34.729 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb 00:10:34.729 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05b46e30-b3bf-4bf3-9fe8-a7d2048bfb5a 00:10:34.992 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:34.992 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:34.992 05:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:34.992 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.992 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.274 rmmod nvme_tcp 00:10:35.274 rmmod nvme_fabrics 00:10:35.274 rmmod nvme_keyring 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67906 ']' 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67906 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 67906 ']' 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 67906 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.274 05:56:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67906 00:10:35.274 killing process with pid 67906 00:10:35.274 05:56:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.274 05:56:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.274 05:56:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67906' 00:10:35.274 05:56:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 67906 00:10:35.274 05:56:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 67906 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
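Teardown mirrors setup in reverse order, as the trace above shows: the subsystem is deleted first so no initiator still holds the namespace, then the lvol and its lvstore, then the kernel NVMe modules are unloaded and the target process is stopped. A condensed sketch (the pid and UUIDs are from this run; deleting the network namespace is an assumption about what the _remove_spdk_ns helper amounts to, written here as plain ip commands):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete 989ef2bb-c6dc-4ffd-ade4-5cc4c0f760bb
  $rpc bdev_lvol_delete_lvstore -u 05b46e30-b3bf-4bf3-9fe8-a7d2048bfb5a
  sync
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 67906 && wait 67906       # stop the nvmf_tgt started for this test (pid from this run)
  ip netns delete nvmf_tgt_ns_spdk   # assumption: equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if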
00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:36.667 ************************************ 00:10:36.667 END TEST nvmf_lvol 00:10:36.667 ************************************ 00:10:36.667 00:10:36.667 real 0m16.541s 00:10:36.667 user 1m6.768s 00:10:36.667 sys 0m3.784s 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 05:56:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:36.667 05:56:52 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:36.667 05:56:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:36.667 05:56:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.667 05:56:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.667 ************************************ 00:10:36.667 START TEST nvmf_lvs_grow 00:10:36.667 ************************************ 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:36.667 * Looking for test storage... 
00:10:36.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.667 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:36.668 Cannot find device "nvmf_tgt_br" 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.668 Cannot find device "nvmf_tgt_br2" 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:36.668 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:36.927 Cannot find device "nvmf_tgt_br" 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:36.927 Cannot find device "nvmf_tgt_br2" 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.927 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.927 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.928 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.186 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:37.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:37.187 00:10:37.187 --- 10.0.0.2 ping statistics --- 00:10:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.187 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:37.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:37.187 00:10:37.187 --- 10.0.0.3 ping statistics --- 00:10:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.187 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:37.187 00:10:37.187 --- 10.0.0.1 ping statistics --- 00:10:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.187 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=68311 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 68311 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 68311 ']' 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
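With the namespace fabric verified, the target for this test is started inside the namespace on a single core, its RPC socket is waited on, and the TCP transport is then created with the same options the script uses. A minimal sketch (binary path and option values are the ones in the trace; the rpc_get_methods polling loop is just one way to wait for /var/tmp/spdk.sock to come up):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # run nvmf_tgt in the target namespace: shm id 0, all tracepoint groups, core 0 only
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # wait until the app answers RPCs on the default socket
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # create the TCP transport with the options used throughout these tests
  $rpc nvmf_create_transport -t tcp -o -u 8192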
00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.187 05:56:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.187 [2024-07-11 05:56:53.014616] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:10:37.187 [2024-07-11 05:56:53.014804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.446 [2024-07-11 05:56:53.188187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.446 [2024-07-11 05:56:53.353913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.446 [2024-07-11 05:56:53.353973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.446 [2024-07-11 05:56:53.354006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.446 [2024-07-11 05:56:53.354018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.446 [2024-07-11 05:56:53.354028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.446 [2024-07-11 05:56:53.354064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.716 [2024-07-11 05:56:53.521657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.290 05:56:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.290 [2024-07-11 05:56:54.163647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.290 ************************************ 00:10:38.290 START TEST lvs_grow_clean 00:10:38.290 ************************************ 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:38.290 05:56:54 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:38.290 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:38.291 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:38.858 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:38.858 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:38.858 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:38.858 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:38.858 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:39.116 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:39.116 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:39.116 05:56:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 lvol 150 00:10:39.460 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=03ef64dc-6712-4fa4-bd59-f3d864eaf118 00:10:39.460 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:39.460 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:39.718 [2024-07-11 05:56:55.387016] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:39.718 [2024-07-11 05:56:55.387133] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:39.718 true 00:10:39.718 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:39.718 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:39.718 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:39.718 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:39.976 05:56:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03ef64dc-6712-4fa4-bd59-f3d864eaf118 00:10:40.235 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:40.493 [2024-07-11 05:56:56.312062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.493 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68394 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68394 /var/tmp/bdevperf.sock 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 68394 ']' 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.751 05:56:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:41.010 [2024-07-11 05:56:56.673884] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
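Pulling the lvs_grow_clean steps together: the lvstore sits on an AIO bdev backed by a plain 200 MiB file, the file is grown to 400 MiB and rescanned, and bdev_lvol_grow_lvstore is called while bdevperf drives random writes through the exported lvol, after which the cluster count is expected to go from 49 to 99. A condensed sketch of the commands as this run issues them (paths, UUIDs and sizes are taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  # 200 MiB file-backed AIO bdev with 4 KiB blocks, lvstore with 4 MiB clusters
  rm -f "$aio" && truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # d9d4e333-... in this run
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB volume
  # grow the backing file and let the AIO bdev pick up the new size
  truncate -s 400M "$aio"
  $rpc bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf acts as the initiator: attach the remote controller, then start I/O
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  # grow the lvstore into the new space while I/O is running, then re-check
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99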
00:10:41.010 [2024-07-11 05:56:56.674295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68394 ] 00:10:41.010 [2024-07-11 05:56:56.837860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.268 [2024-07-11 05:56:57.055152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.526 [2024-07-11 05:56:57.222761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:41.784 05:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.784 05:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:41.784 05:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:42.042 Nvme0n1 00:10:42.042 05:56:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:42.304 [ 00:10:42.304 { 00:10:42.304 "name": "Nvme0n1", 00:10:42.304 "aliases": [ 00:10:42.304 "03ef64dc-6712-4fa4-bd59-f3d864eaf118" 00:10:42.304 ], 00:10:42.304 "product_name": "NVMe disk", 00:10:42.304 "block_size": 4096, 00:10:42.304 "num_blocks": 38912, 00:10:42.304 "uuid": "03ef64dc-6712-4fa4-bd59-f3d864eaf118", 00:10:42.304 "assigned_rate_limits": { 00:10:42.304 "rw_ios_per_sec": 0, 00:10:42.304 "rw_mbytes_per_sec": 0, 00:10:42.304 "r_mbytes_per_sec": 0, 00:10:42.304 "w_mbytes_per_sec": 0 00:10:42.304 }, 00:10:42.304 "claimed": false, 00:10:42.304 "zoned": false, 00:10:42.304 "supported_io_types": { 00:10:42.304 "read": true, 00:10:42.304 "write": true, 00:10:42.304 "unmap": true, 00:10:42.304 "flush": true, 00:10:42.304 "reset": true, 00:10:42.304 "nvme_admin": true, 00:10:42.304 "nvme_io": true, 00:10:42.304 "nvme_io_md": false, 00:10:42.304 "write_zeroes": true, 00:10:42.304 "zcopy": false, 00:10:42.304 "get_zone_info": false, 00:10:42.304 "zone_management": false, 00:10:42.304 "zone_append": false, 00:10:42.304 "compare": true, 00:10:42.304 "compare_and_write": true, 00:10:42.304 "abort": true, 00:10:42.304 "seek_hole": false, 00:10:42.304 "seek_data": false, 00:10:42.304 "copy": true, 00:10:42.304 "nvme_iov_md": false 00:10:42.304 }, 00:10:42.304 "memory_domains": [ 00:10:42.304 { 00:10:42.304 "dma_device_id": "system", 00:10:42.304 "dma_device_type": 1 00:10:42.304 } 00:10:42.304 ], 00:10:42.305 "driver_specific": { 00:10:42.305 "nvme": [ 00:10:42.305 { 00:10:42.305 "trid": { 00:10:42.305 "trtype": "TCP", 00:10:42.305 "adrfam": "IPv4", 00:10:42.305 "traddr": "10.0.0.2", 00:10:42.305 "trsvcid": "4420", 00:10:42.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:42.305 }, 00:10:42.305 "ctrlr_data": { 00:10:42.305 "cntlid": 1, 00:10:42.305 "vendor_id": "0x8086", 00:10:42.305 "model_number": "SPDK bdev Controller", 00:10:42.305 "serial_number": "SPDK0", 00:10:42.305 "firmware_revision": "24.09", 00:10:42.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.305 "oacs": { 00:10:42.305 "security": 0, 00:10:42.305 "format": 0, 00:10:42.305 "firmware": 0, 00:10:42.305 "ns_manage": 0 00:10:42.305 }, 00:10:42.305 "multi_ctrlr": true, 00:10:42.305 
"ana_reporting": false 00:10:42.305 }, 00:10:42.305 "vs": { 00:10:42.305 "nvme_version": "1.3" 00:10:42.305 }, 00:10:42.305 "ns_data": { 00:10:42.305 "id": 1, 00:10:42.305 "can_share": true 00:10:42.305 } 00:10:42.305 } 00:10:42.305 ], 00:10:42.305 "mp_policy": "active_passive" 00:10:42.305 } 00:10:42.305 } 00:10:42.305 ] 00:10:42.305 05:56:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68420 00:10:42.306 05:56:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:42.306 05:56:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:42.569 Running I/O for 10 seconds... 00:10:43.500 Latency(us) 00:10:43.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.500 Nvme0n1 : 1.00 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:43.500 =================================================================================================================== 00:10:43.500 Total : 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:43.500 00:10:44.431 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:44.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.431 Nvme0n1 : 2.00 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:44.431 =================================================================================================================== 00:10:44.431 Total : 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:44.431 00:10:44.689 true 00:10:44.689 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:44.689 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:44.949 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:44.949 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:44.949 05:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68420 00:10:45.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.516 Nvme0n1 : 3.00 5799.67 22.65 0.00 0.00 0.00 0.00 0.00 00:10:45.516 =================================================================================================================== 00:10:45.516 Total : 5799.67 22.65 0.00 0.00 0.00 0.00 0.00 00:10:45.516 00:10:46.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.448 Nvme0n1 : 4.00 5810.25 22.70 0.00 0.00 0.00 0.00 0.00 00:10:46.448 =================================================================================================================== 00:10:46.448 Total : 5810.25 22.70 0.00 0.00 0.00 0.00 0.00 00:10:46.448 00:10:47.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.382 Nvme0n1 : 5.00 5791.20 22.62 0.00 0.00 0.00 0.00 0.00 00:10:47.382 =================================================================================================================== 00:10:47.382 Total : 5791.20 22.62 0.00 0.00 0.00 
0.00 0.00 00:10:47.382 00:10:48.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.315 Nvme0n1 : 6.00 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:10:48.315 =================================================================================================================== 00:10:48.315 Total : 5778.50 22.57 0.00 0.00 0.00 0.00 0.00 00:10:48.315 00:10:49.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.690 Nvme0n1 : 7.00 5769.43 22.54 0.00 0.00 0.00 0.00 0.00 00:10:49.690 =================================================================================================================== 00:10:49.690 Total : 5769.43 22.54 0.00 0.00 0.00 0.00 0.00 00:10:49.690 00:10:50.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.625 Nvme0n1 : 8.00 5683.62 22.20 0.00 0.00 0.00 0.00 0.00 00:10:50.625 =================================================================================================================== 00:10:50.625 Total : 5683.62 22.20 0.00 0.00 0.00 0.00 0.00 00:10:50.625 00:10:51.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.561 Nvme0n1 : 9.00 5687.11 22.22 0.00 0.00 0.00 0.00 0.00 00:10:51.561 =================================================================================================================== 00:10:51.561 Total : 5687.11 22.22 0.00 0.00 0.00 0.00 0.00 00:10:51.561 00:10:52.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.496 Nvme0n1 : 10.00 5677.20 22.18 0.00 0.00 0.00 0.00 0.00 00:10:52.496 =================================================================================================================== 00:10:52.496 Total : 5677.20 22.18 0.00 0.00 0.00 0.00 0.00 00:10:52.496 00:10:52.496 00:10:52.496 Latency(us) 00:10:52.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.497 Nvme0n1 : 10.01 5681.91 22.19 0.00 0.00 22520.93 5987.61 128688.87 00:10:52.497 =================================================================================================================== 00:10:52.497 Total : 5681.91 22.19 0.00 0.00 22520.93 5987.61 128688.87 00:10:52.497 0 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68394 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 68394 ']' 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 68394 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68394 00:10:52.497 killing process with pid 68394 00:10:52.497 Received shutdown signal, test time was about 10.000000 seconds 00:10:52.497 00:10:52.497 Latency(us) 00:10:52.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.497 =================================================================================================================== 00:10:52.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68394' 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 68394 00:10:52.497 05:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 68394 00:10:53.433 05:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.691 05:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:53.950 05:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:53.950 05:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:54.208 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:54.208 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:54.208 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:54.467 [2024-07-11 05:57:10.242486] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:54.467 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:54.725 request: 00:10:54.725 { 00:10:54.725 "uuid": "d9d4e333-f226-4d8c-ba2e-3ea58552c343", 00:10:54.725 "method": "bdev_lvol_get_lvstores", 00:10:54.725 "req_id": 1 00:10:54.725 } 00:10:54.725 Got JSON-RPC error response 00:10:54.725 response: 00:10:54.725 { 00:10:54.725 "code": -19, 00:10:54.725 "message": "No such device" 00:10:54.725 } 00:10:54.725 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:54.725 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.725 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.725 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.725 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:54.984 aio_bdev 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 03ef64dc-6712-4fa4-bd59-f3d864eaf118 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=03ef64dc-6712-4fa4-bd59-f3d864eaf118 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:54.984 05:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:55.242 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03ef64dc-6712-4fa4-bd59-f3d864eaf118 -t 2000 00:10:55.501 [ 00:10:55.501 { 00:10:55.501 "name": "03ef64dc-6712-4fa4-bd59-f3d864eaf118", 00:10:55.501 "aliases": [ 00:10:55.501 "lvs/lvol" 00:10:55.501 ], 00:10:55.501 "product_name": "Logical Volume", 00:10:55.501 "block_size": 4096, 00:10:55.501 "num_blocks": 38912, 00:10:55.501 "uuid": "03ef64dc-6712-4fa4-bd59-f3d864eaf118", 00:10:55.501 "assigned_rate_limits": { 00:10:55.501 "rw_ios_per_sec": 0, 00:10:55.501 "rw_mbytes_per_sec": 0, 00:10:55.501 "r_mbytes_per_sec": 0, 00:10:55.501 "w_mbytes_per_sec": 0 00:10:55.501 }, 00:10:55.501 "claimed": false, 00:10:55.501 "zoned": false, 00:10:55.501 "supported_io_types": { 00:10:55.501 "read": true, 00:10:55.501 "write": true, 00:10:55.501 "unmap": true, 00:10:55.501 "flush": false, 00:10:55.501 "reset": true, 00:10:55.501 "nvme_admin": false, 00:10:55.501 "nvme_io": false, 00:10:55.501 "nvme_io_md": false, 00:10:55.501 "write_zeroes": true, 00:10:55.501 "zcopy": false, 00:10:55.501 "get_zone_info": false, 00:10:55.501 "zone_management": false, 00:10:55.501 "zone_append": false, 00:10:55.501 "compare": false, 00:10:55.501 "compare_and_write": false, 00:10:55.501 "abort": false, 00:10:55.501 "seek_hole": true, 00:10:55.501 "seek_data": true, 00:10:55.501 "copy": false, 00:10:55.501 "nvme_iov_md": false 00:10:55.501 }, 00:10:55.501 
"driver_specific": { 00:10:55.501 "lvol": { 00:10:55.501 "lvol_store_uuid": "d9d4e333-f226-4d8c-ba2e-3ea58552c343", 00:10:55.501 "base_bdev": "aio_bdev", 00:10:55.501 "thin_provision": false, 00:10:55.501 "num_allocated_clusters": 38, 00:10:55.501 "snapshot": false, 00:10:55.501 "clone": false, 00:10:55.501 "esnap_clone": false 00:10:55.501 } 00:10:55.501 } 00:10:55.501 } 00:10:55.501 ] 00:10:55.501 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:55.501 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:55.501 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:55.759 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:55.759 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:55.759 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:56.018 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:56.018 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 03ef64dc-6712-4fa4-bd59-f3d864eaf118 00:10:56.276 05:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9d4e333-f226-4d8c-ba2e-3ea58552c343 00:10:56.276 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:56.534 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:57.101 00:10:57.101 ************************************ 00:10:57.101 END TEST lvs_grow_clean 00:10:57.101 ************************************ 00:10:57.101 real 0m18.546s 00:10:57.101 user 0m17.634s 00:10:57.101 sys 0m2.276s 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:57.101 ************************************ 00:10:57.101 START TEST lvs_grow_dirty 00:10:57.101 ************************************ 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:57.101 05:57:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:57.360 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:57.360 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:57.618 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2806889-b3ed-4ff2-b040-0c71a442ea23 00:10:57.618 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:10:57.618 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:57.875 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:57.875 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:57.875 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b2806889-b3ed-4ff2-b040-0c71a442ea23 lvol 150 00:10:58.132 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:10:58.132 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:58.132 05:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:58.390 [2024-07-11 05:57:14.137038] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:58.390 [2024-07-11 05:57:14.137199] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:58.390 true 00:10:58.390 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:58.390 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:10:58.648 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:58.648 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:58.906 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:10:59.164 05:57:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:59.164 [2024-07-11 05:57:15.081793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68675 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68675 /var/tmp/bdevperf.sock 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68675 ']' 00:10:59.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.422 05:57:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:59.681 [2024-07-11 05:57:15.430970] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:10:59.681 [2024-07-11 05:57:15.432002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68675 ] 00:10:59.939 [2024-07-11 05:57:15.602618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.939 [2024-07-11 05:57:15.826537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.199 [2024-07-11 05:57:16.019187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:00.766 05:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.766 05:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:00.766 05:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:01.024 Nvme0n1 00:11:01.024 05:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:01.282 [ 00:11:01.282 { 00:11:01.282 "name": "Nvme0n1", 00:11:01.282 "aliases": [ 00:11:01.282 "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8" 00:11:01.282 ], 00:11:01.282 "product_name": "NVMe disk", 00:11:01.282 "block_size": 4096, 00:11:01.282 "num_blocks": 38912, 00:11:01.282 "uuid": "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8", 00:11:01.282 "assigned_rate_limits": { 00:11:01.282 "rw_ios_per_sec": 0, 00:11:01.282 "rw_mbytes_per_sec": 0, 00:11:01.282 "r_mbytes_per_sec": 0, 00:11:01.282 "w_mbytes_per_sec": 0 00:11:01.282 }, 00:11:01.282 "claimed": false, 00:11:01.282 "zoned": false, 00:11:01.282 "supported_io_types": { 00:11:01.282 "read": true, 00:11:01.282 "write": true, 00:11:01.282 "unmap": true, 00:11:01.282 "flush": true, 00:11:01.282 "reset": true, 00:11:01.282 "nvme_admin": true, 00:11:01.282 "nvme_io": true, 00:11:01.282 "nvme_io_md": false, 00:11:01.282 "write_zeroes": true, 00:11:01.282 "zcopy": false, 00:11:01.282 "get_zone_info": false, 00:11:01.282 "zone_management": false, 00:11:01.282 "zone_append": false, 00:11:01.282 "compare": true, 00:11:01.282 "compare_and_write": true, 00:11:01.282 "abort": true, 00:11:01.282 "seek_hole": false, 00:11:01.282 "seek_data": false, 00:11:01.282 "copy": true, 00:11:01.282 "nvme_iov_md": false 00:11:01.282 }, 00:11:01.282 "memory_domains": [ 00:11:01.282 { 00:11:01.282 "dma_device_id": "system", 00:11:01.282 "dma_device_type": 1 00:11:01.283 } 00:11:01.283 ], 00:11:01.283 "driver_specific": { 00:11:01.283 "nvme": [ 00:11:01.283 { 00:11:01.283 "trid": { 00:11:01.283 "trtype": "TCP", 00:11:01.283 "adrfam": "IPv4", 00:11:01.283 "traddr": "10.0.0.2", 00:11:01.283 "trsvcid": "4420", 00:11:01.283 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:01.283 }, 00:11:01.283 "ctrlr_data": { 00:11:01.283 "cntlid": 1, 00:11:01.283 "vendor_id": "0x8086", 00:11:01.283 "model_number": "SPDK bdev Controller", 00:11:01.283 "serial_number": "SPDK0", 00:11:01.283 "firmware_revision": "24.09", 00:11:01.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:01.283 "oacs": { 00:11:01.283 "security": 0, 00:11:01.283 "format": 0, 00:11:01.283 "firmware": 0, 00:11:01.283 "ns_manage": 0 00:11:01.283 }, 00:11:01.283 "multi_ctrlr": true, 00:11:01.283 
"ana_reporting": false 00:11:01.283 }, 00:11:01.283 "vs": { 00:11:01.283 "nvme_version": "1.3" 00:11:01.283 }, 00:11:01.283 "ns_data": { 00:11:01.283 "id": 1, 00:11:01.283 "can_share": true 00:11:01.283 } 00:11:01.283 } 00:11:01.283 ], 00:11:01.283 "mp_policy": "active_passive" 00:11:01.283 } 00:11:01.283 } 00:11:01.283 ] 00:11:01.283 05:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68703 00:11:01.283 05:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:01.283 05:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:01.283 Running I/O for 10 seconds... 00:11:02.656 Latency(us) 00:11:02.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.656 Nvme0n1 : 1.00 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:11:02.656 =================================================================================================================== 00:11:02.656 Total : 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:11:02.656 00:11:03.222 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:03.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.480 Nvme0n1 : 2.00 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:11:03.480 =================================================================================================================== 00:11:03.480 Total : 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:11:03.480 00:11:03.480 true 00:11:03.480 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:03.480 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:03.738 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:03.738 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:03.738 05:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68703 00:11:04.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.304 Nvme0n1 : 3.00 6053.67 23.65 0.00 0.00 0.00 0.00 0.00 00:11:04.304 =================================================================================================================== 00:11:04.304 Total : 6053.67 23.65 0.00 0.00 0.00 0.00 0.00 00:11:04.304 00:11:05.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.679 Nvme0n1 : 4.00 6064.25 23.69 0.00 0.00 0.00 0.00 0.00 00:11:05.679 =================================================================================================================== 00:11:05.679 Total : 6064.25 23.69 0.00 0.00 0.00 0.00 0.00 00:11:05.679 00:11:06.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.245 Nvme0n1 : 5.00 5959.80 23.28 0.00 0.00 0.00 0.00 0.00 00:11:06.245 =================================================================================================================== 00:11:06.245 Total : 5959.80 23.28 0.00 0.00 0.00 
0.00 0.00 00:11:06.245 00:11:07.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.619 Nvme0n1 : 6.00 5961.33 23.29 0.00 0.00 0.00 0.00 0.00 00:11:07.619 =================================================================================================================== 00:11:07.619 Total : 5961.33 23.29 0.00 0.00 0.00 0.00 0.00 00:11:07.619 00:11:08.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.613 Nvme0n1 : 7.00 5962.43 23.29 0.00 0.00 0.00 0.00 0.00 00:11:08.613 =================================================================================================================== 00:11:08.613 Total : 5962.43 23.29 0.00 0.00 0.00 0.00 0.00 00:11:08.613 00:11:09.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.545 Nvme0n1 : 8.00 5931.50 23.17 0.00 0.00 0.00 0.00 0.00 00:11:09.545 =================================================================================================================== 00:11:09.545 Total : 5931.50 23.17 0.00 0.00 0.00 0.00 0.00 00:11:09.545 00:11:10.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.479 Nvme0n1 : 9.00 5893.33 23.02 0.00 0.00 0.00 0.00 0.00 00:11:10.479 =================================================================================================================== 00:11:10.479 Total : 5893.33 23.02 0.00 0.00 0.00 0.00 0.00 00:11:10.479 00:11:11.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.411 Nvme0n1 : 10.00 5875.50 22.95 0.00 0.00 0.00 0.00 0.00 00:11:11.411 =================================================================================================================== 00:11:11.411 Total : 5875.50 22.95 0.00 0.00 0.00 0.00 0.00 00:11:11.411 00:11:11.411 00:11:11.412 Latency(us) 00:11:11.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.412 Nvme0n1 : 10.02 5877.45 22.96 0.00 0.00 21769.89 16801.05 108670.60 00:11:11.412 =================================================================================================================== 00:11:11.412 Total : 5877.45 22.96 0.00 0.00 21769.89 16801.05 108670.60 00:11:11.412 0 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68675 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 68675 ']' 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 68675 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68675 00:11:11.412 killing process with pid 68675 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68675' 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 68675 
00:11:11.412 Received shutdown signal, test time was about 10.000000 seconds 00:11:11.412 00:11:11.412 Latency(us) 00:11:11.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.412 =================================================================================================================== 00:11:11.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:11.412 05:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 68675 00:11:12.789 05:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:12.789 05:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:13.048 05:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:13.048 05:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68311 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68311 00:11:13.307 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68311 Killed "${NVMF_APP[@]}" "$@" 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.307 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:13.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68845 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68845 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 68845 ']' 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
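The per-second tables in this run come from the bdevperf sequence traced earlier; a condensed sketch of that sequence is given below, reusing exactly the paths, flags and subsystem NQN shown in the trace (nothing here is new to this run). Since -o 4096 fixes the IO size, the MiB/s column is simply IOPS × 4096 / 2^20, e.g. 5877.45 × 4096 / 1048576 ≈ 22.96, matching the summary row above.

# sketch: launch bdevperf in wait mode (-z), attach an NVMe-oF bdev over the test
# listener, then let perform_tests drive the 10 s randwrite workload at queue depth 128
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests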
00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.308 05:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:13.308 [2024-07-11 05:57:29.190672] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:13.308 [2024-07-11 05:57:29.190835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.566 [2024-07-11 05:57:29.371696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.825 [2024-07-11 05:57:29.554993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.825 [2024-07-11 05:57:29.555073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.825 [2024-07-11 05:57:29.555090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.825 [2024-07-11 05:57:29.555102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.825 [2024-07-11 05:57:29.555113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.825 [2024-07-11 05:57:29.555151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.825 [2024-07-11 05:57:29.723592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.390 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:14.648 [2024-07-11 05:57:30.367364] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:14.648 [2024-07-11 05:57:30.368401] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:14.648 [2024-07-11 05:57:30.368692] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
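The 'dirty' half of the test depends on the blobstore recovery logged just above: the first target was killed with -9 before the grown lvstore was cleanly shut down, so when the new target re-registers the same AIO file, bs_recover replays the on-disk metadata and the lvstore and lvol reappear. A minimal sketch of that re-registration and check, assuming a freshly started target on /var/tmp/spdk.sock and reusing the lvstore UUID from this run:

# sketch: re-create the AIO bdev over the same backing file; bdev examine kicks off
# the blobstore recovery seen above, after which the lvstore is queryable again
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 \
    | jq -r '.[0].free_clusters, .[0].total_data_clusters'
# the trace that follows checks these against 61 free and 99 total data clusters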
00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:14.648 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:14.908 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 -t 2000 00:11:15.167 [ 00:11:15.167 { 00:11:15.167 "name": "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8", 00:11:15.167 "aliases": [ 00:11:15.167 "lvs/lvol" 00:11:15.167 ], 00:11:15.167 "product_name": "Logical Volume", 00:11:15.167 "block_size": 4096, 00:11:15.167 "num_blocks": 38912, 00:11:15.167 "uuid": "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8", 00:11:15.167 "assigned_rate_limits": { 00:11:15.167 "rw_ios_per_sec": 0, 00:11:15.167 "rw_mbytes_per_sec": 0, 00:11:15.167 "r_mbytes_per_sec": 0, 00:11:15.167 "w_mbytes_per_sec": 0 00:11:15.167 }, 00:11:15.167 "claimed": false, 00:11:15.167 "zoned": false, 00:11:15.167 "supported_io_types": { 00:11:15.167 "read": true, 00:11:15.167 "write": true, 00:11:15.167 "unmap": true, 00:11:15.167 "flush": false, 00:11:15.167 "reset": true, 00:11:15.167 "nvme_admin": false, 00:11:15.167 "nvme_io": false, 00:11:15.167 "nvme_io_md": false, 00:11:15.167 "write_zeroes": true, 00:11:15.167 "zcopy": false, 00:11:15.167 "get_zone_info": false, 00:11:15.167 "zone_management": false, 00:11:15.167 "zone_append": false, 00:11:15.167 "compare": false, 00:11:15.167 "compare_and_write": false, 00:11:15.167 "abort": false, 00:11:15.167 "seek_hole": true, 00:11:15.167 "seek_data": true, 00:11:15.167 "copy": false, 00:11:15.167 "nvme_iov_md": false 00:11:15.167 }, 00:11:15.167 "driver_specific": { 00:11:15.167 "lvol": { 00:11:15.167 "lvol_store_uuid": "b2806889-b3ed-4ff2-b040-0c71a442ea23", 00:11:15.167 "base_bdev": "aio_bdev", 00:11:15.167 "thin_provision": false, 00:11:15.167 "num_allocated_clusters": 38, 00:11:15.167 "snapshot": false, 00:11:15.167 "clone": false, 00:11:15.167 "esnap_clone": false 00:11:15.167 } 00:11:15.167 } 00:11:15.167 } 00:11:15.167 ] 00:11:15.167 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:15.167 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:15.167 05:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:15.426 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:15.426 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:15.426 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:15.426 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:15.426 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:15.685 [2024-07-11 05:57:31.537058] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:15.685 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:15.944 request: 00:11:15.944 { 00:11:15.944 "uuid": "b2806889-b3ed-4ff2-b040-0c71a442ea23", 00:11:15.944 "method": "bdev_lvol_get_lvstores", 00:11:15.944 "req_id": 1 00:11:15.944 } 00:11:15.944 Got JSON-RPC error response 00:11:15.944 response: 00:11:15.944 { 00:11:15.944 "code": -19, 00:11:15.944 "message": "No such device" 00:11:15.944 } 00:11:15.944 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:15.944 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:15.944 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:15.944 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:15.944 05:57:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:16.203 aio_bdev 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:16.203 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:16.462 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 -t 2000 00:11:16.721 [ 00:11:16.721 { 00:11:16.721 "name": "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8", 00:11:16.721 "aliases": [ 00:11:16.721 "lvs/lvol" 00:11:16.721 ], 00:11:16.721 "product_name": "Logical Volume", 00:11:16.721 "block_size": 4096, 00:11:16.721 "num_blocks": 38912, 00:11:16.721 "uuid": "deca2f6d-2d0d-4db0-a7f8-7925d63c86f8", 00:11:16.721 "assigned_rate_limits": { 00:11:16.721 "rw_ios_per_sec": 0, 00:11:16.721 "rw_mbytes_per_sec": 0, 00:11:16.721 "r_mbytes_per_sec": 0, 00:11:16.721 "w_mbytes_per_sec": 0 00:11:16.721 }, 00:11:16.721 "claimed": false, 00:11:16.721 "zoned": false, 00:11:16.721 "supported_io_types": { 00:11:16.721 "read": true, 00:11:16.721 "write": true, 00:11:16.721 "unmap": true, 00:11:16.721 "flush": false, 00:11:16.721 "reset": true, 00:11:16.721 "nvme_admin": false, 00:11:16.721 "nvme_io": false, 00:11:16.721 "nvme_io_md": false, 00:11:16.721 "write_zeroes": true, 00:11:16.721 "zcopy": false, 00:11:16.721 "get_zone_info": false, 00:11:16.721 "zone_management": false, 00:11:16.721 "zone_append": false, 00:11:16.721 "compare": false, 00:11:16.721 "compare_and_write": false, 00:11:16.721 "abort": false, 00:11:16.721 "seek_hole": true, 00:11:16.721 "seek_data": true, 00:11:16.721 "copy": false, 00:11:16.721 "nvme_iov_md": false 00:11:16.721 }, 00:11:16.721 "driver_specific": { 00:11:16.721 "lvol": { 00:11:16.721 "lvol_store_uuid": "b2806889-b3ed-4ff2-b040-0c71a442ea23", 00:11:16.721 "base_bdev": "aio_bdev", 00:11:16.721 "thin_provision": false, 00:11:16.721 "num_allocated_clusters": 38, 00:11:16.721 "snapshot": false, 00:11:16.721 "clone": false, 00:11:16.721 "esnap_clone": false 00:11:16.721 } 00:11:16.721 } 00:11:16.721 } 00:11:16.721 ] 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:16.721 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:16.979 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:16.979 05:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete deca2f6d-2d0d-4db0-a7f8-7925d63c86f8 00:11:17.238 05:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b2806889-b3ed-4ff2-b040-0c71a442ea23 00:11:17.496 05:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:17.754 05:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:18.322 ************************************ 00:11:18.322 END TEST lvs_grow_dirty 00:11:18.322 ************************************ 00:11:18.322 00:11:18.323 real 0m21.161s 00:11:18.323 user 0m45.585s 00:11:18.323 sys 0m8.157s 00:11:18.323 05:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.323 05:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 05:57:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:18.323 nvmf_trace.0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.323 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.581 rmmod nvme_tcp 00:11:18.581 rmmod nvme_fabrics 00:11:18.581 rmmod nvme_keyring 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68845 ']' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68845 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 68845 ']' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 68845 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68845 00:11:18.581 killing process with pid 68845 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68845' 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 68845 00:11:18.581 05:57:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 68845 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.958 05:57:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.959 05:57:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:19.959 ************************************ 00:11:19.959 END TEST nvmf_lvs_grow 00:11:19.959 ************************************ 00:11:19.959 00:11:19.959 real 0m43.070s 00:11:19.959 user 1m10.046s 00:11:19.959 sys 0m11.359s 00:11:19.959 05:57:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.959 05:57:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:19.959 05:57:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:19.959 05:57:35 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:19.959 05:57:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.959 05:57:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.959 05:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.959 ************************************ 00:11:19.959 START TEST nvmf_bdev_io_wait 00:11:19.959 ************************************ 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:19.959 * Looking for test storage... 
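run_test hands each target script its transport on the command line, as in the bdev_io_wait invocation just above. Assuming a built tree and the environment the harness sets up, the two suites covered in this part of the log correspond roughly to direct invocations like the following (the --transport flag for nvmf_lvs_grow.sh is inferred from the harness, not shown verbatim in this trace):

# hypothetical direct invocations; the harness wraps these in run_test with timing
cd /home/vagrant/spdk_repo/spdk
./test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
./test/nvmf/target/bdev_io_wait.sh --transport=tcp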
00:11:19.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:19.959 Cannot find device "nvmf_tgt_br" 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.959 Cannot find device "nvmf_tgt_br2" 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:19.959 Cannot find device "nvmf_tgt_br" 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:19.959 Cannot find device "nvmf_tgt_br2" 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:19.959 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:19.960 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.960 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.960 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.960 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:19.960 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:20.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:20.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:11:20.219 00:11:20.219 --- 10.0.0.2 ping statistics --- 00:11:20.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.219 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:20.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:20.219 00:11:20.219 --- 10.0.0.3 ping statistics --- 00:11:20.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.219 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:20.219 00:11:20.219 --- 10.0.0.1 ping statistics --- 00:11:20.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.219 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69175 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69175 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 69175 ']' 00:11:20.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
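Taken together, the nvmf_veth_init sequence above builds a small three-legged topology: the initiator interface nvmf_init_if (10.0.0.1) stays in the root namespace, the two target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, their root-side veth peers are enslaved to the bridge nvmf_br, and an iptables rule admits TCP port 4420. The three pings confirm reachability before the target is started; after that, NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk" so nvmf_tgt itself runs inside the namespace. A condensed standalone version of the same setup, with names and addresses copied from the log and error handling omitted:

    #!/usr/bin/env bash
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1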
00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.219 05:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:20.219 [2024-07-11 05:57:36.105612] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:20.219 [2024-07-11 05:57:36.106060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.477 [2024-07-11 05:57:36.278650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.749 [2024-07-11 05:57:36.471916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.749 [2024-07-11 05:57:36.471997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.749 [2024-07-11 05:57:36.472019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.749 [2024-07-11 05:57:36.472050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.749 [2024-07-11 05:57:36.472066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.749 [2024-07-11 05:57:36.472284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.749 [2024-07-11 05:57:36.473070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.749 [2024-07-11 05:57:36.473269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.749 [2024-07-11 05:57:36.473272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.328 05:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.328 05:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:21.328 05:57:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.328 05:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:21.328 05:57:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.328 [2024-07-11 05:57:37.220194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.328 
05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.328 [2024-07-11 05:57:37.237223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.328 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.587 Malloc0 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.587 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:21.588 [2024-07-11 05:57:37.351025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69211 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69213 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.588 { 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme$subsystem", 00:11:21.588 "trtype": "$TEST_TRANSPORT", 00:11:21.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "$NVMF_PORT", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.588 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:11:21.588 "hdgst": ${hdgst:-false}, 00:11:21.588 "ddgst": ${ddgst:-false} 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 } 00:11:21.588 EOF 00:11:21.588 )") 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69215 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.588 { 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme$subsystem", 00:11:21.588 "trtype": "$TEST_TRANSPORT", 00:11:21.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "$NVMF_PORT", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.588 "hdgst": ${hdgst:-false}, 00:11:21.588 "ddgst": ${ddgst:-false} 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 } 00:11:21.588 EOF 00:11:21.588 )") 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69218 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.588 { 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme$subsystem", 00:11:21.588 "trtype": "$TEST_TRANSPORT", 00:11:21.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "$NVMF_PORT", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.588 "hdgst": ${hdgst:-false}, 00:11:21.588 "ddgst": ${ddgst:-false} 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 } 00:11:21.588 EOF 00:11:21.588 )") 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.588 { 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme$subsystem", 00:11:21.588 "trtype": "$TEST_TRANSPORT", 00:11:21.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "$NVMF_PORT", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.588 "hdgst": ${hdgst:-false}, 00:11:21.588 "ddgst": ${ddgst:-false} 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 } 00:11:21.588 EOF 00:11:21.588 )") 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme1", 00:11:21.588 "trtype": "tcp", 00:11:21.588 "traddr": "10.0.0.2", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "4420", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.588 "hdgst": false, 00:11:21.588 "ddgst": false 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 }' 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme1", 00:11:21.588 "trtype": "tcp", 00:11:21.588 "traddr": "10.0.0.2", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "4420", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.588 "hdgst": false, 00:11:21.588 "ddgst": false 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 }' 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
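The test then drives four bdevperf instances at once against that subsystem, one per workload, each pinned to its own core mask and shared-memory instance id and given 256 MB of memory: write on -m 0x10 (-i 1), read on -m 0x20 (-i 2), flush on -m 0x40 (-i 3) and unmap on -m 0x80 (-i 4). Each reads its bdev configuration from /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json (a harness function from test/nvmf/common.sh). A reduced sketch of that launch pattern, with the flags copied from the log and the backgrounding/PID bookkeeping reconstructed:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    READ_PID=$!
    $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    # The harness waits on each pid in turn before tearing the target down.
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"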
00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme1", 00:11:21.588 "trtype": "tcp", 00:11:21.588 "traddr": "10.0.0.2", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "4420", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.588 "hdgst": false, 00:11:21.588 "ddgst": false 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 }' 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.588 "params": { 00:11:21.588 "name": "Nvme1", 00:11:21.588 "trtype": "tcp", 00:11:21.588 "traddr": "10.0.0.2", 00:11:21.588 "adrfam": "ipv4", 00:11:21.588 "trsvcid": "4420", 00:11:21.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.588 "hdgst": false, 00:11:21.588 "ddgst": false 00:11:21.588 }, 00:11:21.588 "method": "bdev_nvme_attach_controller" 00:11:21.588 }' 00:11:21.588 05:57:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69211 00:11:21.588 [2024-07-11 05:57:37.453430] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:21.588 [2024-07-11 05:57:37.453781] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:21.588 [2024-07-11 05:57:37.486016] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:21.588 [2024-07-11 05:57:37.486394] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-11 05:57:37.486379] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:21.588 [2024-07-11 05:57:37.486510] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:21.588 [2024-07-11 05:57:37.486559] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
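For reference, the controller definition carried by each of those /dev/fd/63 streams is the bdev_nvme_attach_controller entry printed above; pretty-printed here with values unchanged (only whitespace differs from the log):

    # The attach-controller entry each bdevperf instance consumes, reformatted:
    cat <<'JSON'
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    JSON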
00:11:21.588 [2024-07-11 05:57:37.486721] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:21.588 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:21.847 [2024-07-11 05:57:37.669724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.847 [2024-07-11 05:57:37.707099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.847 [2024-07-11 05:57:37.755139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.106 [2024-07-11 05:57:37.801173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.106 [2024-07-11 05:57:37.882135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:22.106 [2024-07-11 05:57:37.906104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:22.106 [2024-07-11 05:57:37.967985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.106 [2024-07-11 05:57:38.015009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:22.365 [2024-07-11 05:57:38.068874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.365 [2024-07-11 05:57:38.103778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.365 [2024-07-11 05:57:38.152674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.365 [2024-07-11 05:57:38.212155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.365 Running I/O for 1 seconds... 00:11:22.365 Running I/O for 1 seconds... 00:11:22.623 Running I/O for 1 seconds... 00:11:22.623 Running I/O for 1 seconds... 
00:11:23.559 00:11:23.559 Latency(us) 00:11:23.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.559 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:23.559 Nvme1n1 : 1.03 4861.17 18.99 0.00 0.00 25786.48 8460.10 48615.80 00:11:23.559 =================================================================================================================== 00:11:23.559 Total : 4861.17 18.99 0.00 0.00 25786.48 8460.10 48615.80 00:11:23.559 00:11:23.559 Latency(us) 00:11:23.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.559 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:23.559 Nvme1n1 : 1.00 140427.61 548.55 0.00 0.00 908.29 407.74 1906.50 00:11:23.559 =================================================================================================================== 00:11:23.559 Total : 140427.61 548.55 0.00 0.00 908.29 407.74 1906.50 00:11:23.559 00:11:23.559 Latency(us) 00:11:23.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.559 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:23.559 Nvme1n1 : 1.01 6945.44 27.13 0.00 0.00 18311.54 4766.25 28716.68 00:11:23.559 =================================================================================================================== 00:11:23.559 Total : 6945.44 27.13 0.00 0.00 18311.54 4766.25 28716.68 00:11:23.559 00:11:23.559 Latency(us) 00:11:23.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.559 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:23.559 Nvme1n1 : 1.01 4391.97 17.16 0.00 0.00 28990.16 10187.87 52428.80 00:11:23.559 =================================================================================================================== 00:11:23.559 Total : 4391.97 17.16 0.00 0.00 28990.16 10187.87 52428.80 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69213 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69215 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 69218 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.519 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.778 rmmod nvme_tcp 00:11:24.778 rmmod nvme_fabrics 00:11:24.778 rmmod nvme_keyring 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69175 ']' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69175 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 69175 ']' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 69175 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69175 00:11:24.778 killing process with pid 69175 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69175' 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 69175 00:11:24.778 05:57:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 69175 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:26.154 ************************************ 00:11:26.154 END TEST nvmf_bdev_io_wait 00:11:26.154 ************************************ 00:11:26.154 00:11:26.154 real 0m6.166s 00:11:26.154 user 0m27.833s 00:11:26.154 sys 0m2.421s 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.154 05:57:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:26.154 05:57:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:26.154 05:57:41 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:26.154 05:57:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.154 05:57:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.154 05:57:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.154 ************************************ 00:11:26.154 START TEST nvmf_queue_depth 00:11:26.154 ************************************ 00:11:26.154 05:57:41 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:26.154 * Looking for test storage... 00:11:26.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.154 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:26.155 Cannot find device "nvmf_tgt_br" 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.155 Cannot find device "nvmf_tgt_br2" 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:26.155 Cannot find device "nvmf_tgt_br" 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:26.155 Cannot find device "nvmf_tgt_br2" 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:26.155 05:57:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:26.155 05:57:41 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.155 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:11:26.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:26.414 00:11:26.414 --- 10.0.0.2 ping statistics --- 00:11:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.414 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:26.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:26.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:26.414 00:11:26.414 --- 10.0.0.3 ping statistics --- 00:11:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.414 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:26.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:26.414 00:11:26.414 --- 10.0.0.1 ping statistics --- 00:11:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.414 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69476 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69476 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69476 ']' 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.414 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
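The queue_depth test rebuilds the same veth topology and then starts a second target instance, this time on a single core (-m 0x2, i.e. core 1) inside the namespace, with waitforlisten polling until the RPC socket answers. A bare-bones equivalent of that start-and-wait step; the binary path, namespace and socket path are taken from the log, while the polling details (rpc_get_methods, sleep interval, liveness check) are an illustrative stand-in for what autotest_common.sh's waitforlisten actually does:

    nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll until the target opens /var/tmp/spdk.sock and answers RPCs.
    until [ -S /var/tmp/spdk.sock ] && \
          "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done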
00:11:26.415 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.415 05:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:26.673 [2024-07-11 05:57:42.379194] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:11:26.673 [2024-07-11 05:57:42.379612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.673 [2024-07-11 05:57:42.554846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.932 [2024-07-11 05:57:42.770296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.932 [2024-07-11 05:57:42.770368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.932 [2024-07-11 05:57:42.770385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.932 [2024-07-11 05:57:42.770397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.932 [2024-07-11 05:57:42.770407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.932 [2024-07-11 05:57:42.770442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.191 [2024-07-11 05:57:42.951124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.450 [2024-07-11 05:57:43.361529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.450 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 Malloc0 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 [2024-07-11 05:57:43.455694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69508 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69508 /var/tmp/bdevperf.sock 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 69508 ']' 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.709 05:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:27.709 [2024-07-11 05:57:43.547175] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
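Unlike the bdev_io_wait runs, queue_depth starts a single bdevperf in wait-for-RPC mode (-z) on its own socket /var/tmp/bdevperf.sock, attaches the NVMe-oF controller through that socket, and only then drives a 10-second verify workload at queue depth 1024 via bdevperf.py, as the log lines below show. A condensed sketch of the same flow, with paths and arguments copied from the log and the stop/wait step mirroring the harness's killprocess:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    "$bdevperf" -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # ... wait for $sock to come up, as with the target above ...

    # Create bdev NVMe0n1 backed by the target's namespace.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Kick off the configured workload; bdevperf prints the latency table itself.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

    kill "$bdevperf_pid"
    wait "$bdevperf_pid" || true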
00:11:27.709 [2024-07-11 05:57:43.547548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69508 ] 00:11:27.967 [2024-07-11 05:57:43.701047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.226 [2024-07-11 05:57:43.917913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.226 [2024-07-11 05:57:44.079128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.793 NVMe0n1 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.793 05:57:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:28.793 Running I/O for 10 seconds... 00:11:40.998 00:11:40.998 Latency(us) 00:11:40.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.998 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:40.998 Verification LBA range: start 0x0 length 0x4000 00:11:40.998 NVMe0n1 : 10.08 6937.77 27.10 0.00 0.00 146793.65 10783.65 125829.12 00:11:40.998 =================================================================================================================== 00:11:40.998 Total : 6937.77 27.10 0.00 0.00 146793.65 10783.65 125829.12 00:11:40.998 0 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69508 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69508 ']' 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69508 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69508 00:11:40.998 killing process with pid 69508 00:11:40.998 Received shutdown signal, test time was about 10.000000 seconds 00:11:40.998 00:11:40.998 Latency(us) 00:11:40.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.998 =================================================================================================================== 00:11:40.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 69508' 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69508 00:11:40.998 05:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69508 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:40.998 rmmod nvme_tcp 00:11:40.998 rmmod nvme_fabrics 00:11:40.998 rmmod nvme_keyring 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69476 ']' 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69476 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 69476 ']' 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 69476 00:11:40.998 05:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69476 00:11:40.998 killing process with pid 69476 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69476' 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 69476 00:11:40.998 05:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 69476 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:41.565 00:11:41.565 real 
0m15.519s 00:11:41.565 user 0m26.464s 00:11:41.565 sys 0m2.081s 00:11:41.565 ************************************ 00:11:41.565 END TEST nvmf_queue_depth 00:11:41.565 ************************************ 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.565 05:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 05:57:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:41.565 05:57:57 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:41.565 05:57:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.565 05:57:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.565 05:57:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 ************************************ 00:11:41.565 START TEST nvmf_target_multipath 00:11:41.565 ************************************ 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:41.565 * Looking for test storage... 00:11:41.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.565 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:41.566 05:57:57 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:41.566 Cannot find device "nvmf_tgt_br" 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:41.566 Cannot find device "nvmf_tgt_br2" 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:41.566 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:41.823 Cannot find device "nvmf_tgt_br" 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:41.823 Cannot find device "nvmf_tgt_br2" 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:41.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:41.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
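The nvmf_veth_init trace running through this part of the log builds a three-legged veth/bridge topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, both target interfaces sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and everything is joined through the nvmf_br bridge with port 4420 opened in iptables. Condensed into a standalone sketch using the same interface names and addresses as the trace (shown only to make the topology easier to follow, not a verbatim replay of nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target leg
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target leg
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" master nvmf_br && ip link set "$br" up    # enslave the host-side veth ends
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace (10.0.0.2, 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are the sanity check that this wiring is working before the target is started.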
00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:41.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:41.823 00:11:41.823 --- 10.0.0.2 ping statistics --- 00:11:41.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.823 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:41.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:41.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:41.823 00:11:41.823 --- 10.0.0.3 ping statistics --- 00:11:41.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.823 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:41.823 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:42.081 00:11:42.081 --- 10.0.0.1 ping statistics --- 00:11:42.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.081 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.081 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:42.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69856 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69856 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 69856 ']' 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.082 05:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:42.082 [2024-07-11 05:57:57.895025] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:11:42.082 [2024-07-11 05:57:57.895498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.340 [2024-07-11 05:57:58.071494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.599 [2024-07-11 05:57:58.292149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.599 [2024-07-11 05:57:58.292418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.599 [2024-07-11 05:57:58.292563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.599 [2024-07-11 05:57:58.292751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.599 [2024-07-11 05:57:58.292797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.599 [2024-07-11 05:57:58.293130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.599 [2024-07-11 05:57:58.294487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.599 [2024-07-11 05:57:58.294673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.599 [2024-07-11 05:57:58.294697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.599 [2024-07-11 05:57:58.460295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.182 05:57:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:43.444 [2024-07-11 05:57:59.117860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.444 05:57:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:43.704 Malloc0 00:11:43.704 05:57:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:43.962 05:57:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.221 05:58:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.480 [2024-07-11 05:58:00.207187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.480 05:58:00 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:44.739 [2024-07-11 05:58:00.459359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:44.739 05:58:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:44.739 05:58:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:44.997 05:58:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.997 05:58:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:44.997 05:58:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.997 05:58:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:44.997 05:58:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:46.900 05:58:02 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69946 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:46.900 05:58:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:46.900 [global] 00:11:46.900 thread=1 00:11:46.900 invalidate=1 00:11:46.900 rw=randrw 00:11:46.900 time_based=1 00:11:46.900 runtime=6 00:11:46.900 ioengine=libaio 00:11:46.900 direct=1 00:11:46.900 bs=4096 00:11:46.900 iodepth=128 00:11:46.900 norandommap=0 00:11:46.900 numjobs=1 00:11:46.900 00:11:46.900 verify_dump=1 00:11:46.900 verify_backlog=512 00:11:46.900 verify_state_save=0 00:11:46.900 do_verify=1 00:11:46.900 verify=crc32c-intel 00:11:46.900 [job0] 00:11:46.900 filename=/dev/nvme0n1 00:11:46.900 Could not set queue depth (nvme0n1) 00:11:47.159 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.159 fio-3.35 00:11:47.159 Starting 1 thread 00:11:48.092 05:58:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:48.349 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:48.607 
05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:48.607 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:48.865 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:49.124 05:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69946 00:11:53.312 00:11:53.312 job0: (groupid=0, jobs=1): err= 0: pid=69971: Thu Jul 11 05:58:09 2024 00:11:53.312 read: IOPS=8647, BW=33.8MiB/s (35.4MB/s)(203MiB/6007msec) 00:11:53.312 slat (usec): min=6, max=7131, avg=70.83, stdev=280.29 00:11:53.312 clat (usec): min=997, max=20327, avg=10204.59, stdev=1820.89 00:11:53.312 lat (usec): min=1013, max=20337, avg=10275.42, stdev=1824.95 00:11:53.312 clat percentiles (usec): 00:11:53.312 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9241], 00:11:53.312 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:11:53.312 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[14353], 00:11:53.312 | 99.00th=[16057], 99.50th=[16581], 99.90th=[18482], 99.95th=[18744], 00:11:53.312 | 99.99th=[20317] 00:11:53.312 bw ( KiB/s): min= 5728, max=22976, per=52.26%, avg=18077.73, stdev=4455.70, samples=11 00:11:53.312 iops : min= 1432, max= 5744, avg=4519.36, stdev=1113.89, samples=11 00:11:53.312 write: IOPS=4918, BW=19.2MiB/s (20.1MB/s)(102MiB/5297msec); 0 zone resets 00:11:53.312 slat (usec): min=14, max=3478, avg=78.11, stdev=205.83 00:11:53.312 clat (usec): min=2832, max=18716, avg=8897.62, stdev=1679.75 00:11:53.312 lat (usec): min=2860, max=18745, avg=8975.73, stdev=1686.57 00:11:53.312 clat percentiles (usec): 00:11:53.312 | 1.00th=[ 3982], 5.00th=[ 5145], 10.00th=[ 6849], 20.00th=[ 8225], 00:11:53.312 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:11:53.312 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814], 00:11:53.312 | 99.00th=[13960], 99.50th=[14877], 99.90th=[16581], 99.95th=[17171], 00:11:53.312 | 99.99th=[18220] 00:11:53.312 bw ( KiB/s): min= 6136, max=22360, per=91.71%, avg=18042.00, stdev=4274.74, samples=11 00:11:53.312 iops : min= 1534, max= 5590, avg=4510.36, stdev=1068.61, samples=11 00:11:53.312 lat (usec) : 1000=0.01% 00:11:53.312 lat (msec) : 2=0.01%, 4=0.46%, 10=58.47%, 20=41.05%, 50=0.01% 00:11:53.312 cpu : usr=4.96%, sys=18.90%, ctx=4492, majf=0, minf=78 00:11:53.312 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:53.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.312 issued rwts: total=51945,26052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.312 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.312 00:11:53.312 Run status group 0 (all jobs): 00:11:53.312 READ: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=203MiB (213MB), run=6007-6007msec 00:11:53.312 WRITE: bw=19.2MiB/s (20.1MB/s), 19.2MiB/s-19.2MiB/s (20.1MB/s-20.1MB/s), io=102MiB (107MB), run=5297-5297msec 00:11:53.312 00:11:53.312 Disk stats (read/write): 00:11:53.312 nvme0n1: ios=51205/25580, merge=0/0, ticks=503454/215323, in_queue=718777, util=98.65% 00:11:53.312 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:53.571 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70047 00:11:53.829 05:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:53.829 [global] 00:11:53.829 thread=1 00:11:53.829 invalidate=1 00:11:53.829 rw=randrw 00:11:53.829 time_based=1 00:11:53.829 runtime=6 00:11:53.829 ioengine=libaio 00:11:53.829 direct=1 00:11:53.829 bs=4096 00:11:53.829 iodepth=128 00:11:53.829 norandommap=0 00:11:53.829 numjobs=1 00:11:53.829 00:11:53.829 verify_dump=1 00:11:53.829 verify_backlog=512 00:11:53.829 verify_state_save=0 00:11:53.829 do_verify=1 00:11:53.829 verify=crc32c-intel 00:11:53.829 [job0] 00:11:53.829 filename=/dev/nvme0n1 00:11:53.829 Could not set queue depth (nvme0n1) 00:11:54.087 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:54.087 fio-3.35 00:11:54.087 Starting 1 thread 00:11:55.020 05:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:55.020 05:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
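The check_ana_state helper whose xtrace appears throughout this test waits for the kernel's view of a path to catch up with the ANA state just pushed via nvmf_subsystem_listener_set_ana_state: it polls /sys/block/<controller-path>/ana_state until the expected string shows up or a timeout expires. A minimal stand-alone version of that polling pattern is sketched below; the trace only shows the sysfs comparison and the 20-step timeout, so the 1-second retry interval and the exact loop shape are assumptions rather than a copy of multipath.sh:

    check_ana_state() {
        local path=$1 expected=$2 timeout=20
        local f=/sys/block/$path/ana_state
        # poll until sysfs reports the expected ANA state, or give up
        while [[ ! -e $f || $(<"$f") != "$expected" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }
    # e.g. check_ana_state nvme0c0n1 inaccessible

Here nvme0c0n1 and nvme0c1n1 are the two per-path controller nodes of the multipath device, which is why the test flips one listener to inaccessible and the other to non_optimized and expects fio on /dev/nvme0n1 to keep running.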
00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:55.586 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:55.844 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:55.845 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:55.845 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:55.845 05:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70047 00:12:00.031 00:12:00.031 job0: (groupid=0, jobs=1): err= 0: pid=70073: Thu Jul 11 05:58:15 2024 00:12:00.031 read: IOPS=9934, BW=38.8MiB/s (40.7MB/s)(233MiB/6007msec) 00:12:00.031 slat (usec): min=6, max=7153, avg=52.57, stdev=226.85 00:12:00.031 clat (usec): min=373, max=18044, avg=8968.32, stdev=2522.17 00:12:00.031 lat (usec): min=391, max=18052, avg=9020.89, stdev=2538.48 00:12:00.031 clat percentiles (usec): 00:12:00.031 | 1.00th=[ 2147], 5.00th=[ 3982], 10.00th=[ 5276], 20.00th=[ 7373], 00:12:00.031 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:12:00.031 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11207], 95.00th=[13304], 00:12:00.031 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16712], 99.95th=[16909], 00:12:00.031 | 99.99th=[17433] 00:12:00.031 bw ( KiB/s): min= 7744, max=36112, per=51.49%, avg=20463.33, stdev=8013.71, samples=12 00:12:00.031 iops : min= 1936, max= 9028, avg=5115.83, stdev=2003.43, samples=12 00:12:00.031 write: IOPS=5877, BW=23.0MiB/s (24.1MB/s)(120MiB/5235msec); 0 zone resets 00:12:00.031 slat (usec): min=12, max=2280, avg=58.65, stdev=161.16 00:12:00.031 clat (usec): min=969, max=16556, avg=7391.19, stdev=2338.87 00:12:00.031 lat (usec): min=997, max=16586, avg=7449.84, stdev=2358.70 00:12:00.031 clat percentiles (usec): 00:12:00.031 | 1.00th=[ 2278], 5.00th=[ 3261], 10.00th=[ 3916], 20.00th=[ 4948], 00:12:00.031 | 30.00th=[ 5997], 40.00th=[ 7373], 50.00th=[ 8094], 60.00th=[ 8586], 00:12:00.031 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10290], 00:12:00.031 | 99.00th=[12387], 99.50th=[13566], 99.90th=[15008], 99.95th=[15270], 00:12:00.031 | 99.99th=[16450] 00:12:00.031 bw ( KiB/s): min= 8192, max=35112, per=87.08%, avg=20472.67, stdev=7921.32, samples=12 00:12:00.031 iops : min= 2048, max= 8778, avg=5118.17, stdev=1980.33, samples=12 00:12:00.031 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.08% 00:12:00.031 lat (msec) : 2=0.60%, 4=6.26%, 10=70.57%, 20=22.41% 00:12:00.031 cpu : usr=5.24%, sys=20.68%, ctx=5301, majf=0, minf=133 00:12:00.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:00.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:00.031 issued rwts: total=59678,30767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:00.031 00:12:00.032 Run status group 0 (all jobs): 00:12:00.032 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=233MiB (244MB), run=6007-6007msec 00:12:00.032 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=120MiB (126MB), run=5235-5235msec 00:12:00.032 00:12:00.032 Disk stats (read/write): 00:12:00.032 nvme0n1: ios=59074/30005, merge=0/0, ticks=509647/208899, in_queue=718546, util=98.66% 00:12:00.032 05:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:00.289 05:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.289 05:58:15 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:00.289 05:58:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.289 05:58:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.289 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.289 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.289 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:00.289 05:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.547 rmmod nvme_tcp 00:12:00.547 rmmod nvme_fabrics 00:12:00.547 rmmod nvme_keyring 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69856 ']' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69856 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 69856 ']' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 69856 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69856 00:12:00.547 killing process with pid 69856 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69856' 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 69856 00:12:00.547 05:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 69856 00:12:01.959 
05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:01.959 ************************************ 00:12:01.959 END TEST nvmf_target_multipath 00:12:01.959 ************************************ 00:12:01.959 00:12:01.959 real 0m20.188s 00:12:01.959 user 1m14.319s 00:12:01.959 sys 0m9.310s 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.959 05:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 05:58:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:01.959 05:58:17 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.959 05:58:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.959 05:58:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.959 05:58:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 ************************************ 00:12:01.959 START TEST nvmf_zcopy 00:12:01.959 ************************************ 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.959 * Looking for test storage... 
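Before the zcopy run gets going, it is worth collecting what the multipath test above actually drove through rpc.py and nvme-cli. Stripped of the xtrace noise, the target configuration and the two initiator connections boil down to the following condensed sketch (same NQN, serial, addresses, and flags as in the trace; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # two NVMe/TCP paths to the same subsystem -> one multipath block device on the host
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

The test then toggles ANA states on the two listeners with rpc.py nvmf_subsystem_listener_set_ana_state (optimized, non_optimized, inaccessible) while fio runs randrw against /dev/nvme0n1, once under the numa I/O policy and once under round-robin, which is what the two fio result blocks above correspond to.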
00:12:01.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:01.959 Cannot find device "nvmf_tgt_br" 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:01.959 Cannot find device "nvmf_tgt_br2" 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:01.959 Cannot find device "nvmf_tgt_br" 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:01.959 Cannot find device "nvmf_tgt_br2" 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:01.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:01.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:01.959 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:02.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:02.218 00:12:02.218 --- 10.0.0.2 ping statistics --- 00:12:02.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.218 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:02.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:02.218 00:12:02.218 --- 10.0.0.3 ping statistics --- 00:12:02.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.218 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:02.218 00:12:02.218 --- 10.0.0.1 ping statistics --- 00:12:02.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.218 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.218 05:58:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:02.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70329 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70329 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 70329 ']' 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:02.218 05:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:02.218 [2024-07-11 05:58:18.090840] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:12:02.218 [2024-07-11 05:58:18.091629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.476 [2024-07-11 05:58:18.245147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.476 [2024-07-11 05:58:18.392955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.476 [2024-07-11 05:58:18.393193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
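The nvmf_veth_init steps traced above amount to a small veth/bridge topology: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, everything is tied together by the nvmf_br bridge, and iptables opens TCP port 4420. A minimal standalone sketch of the same setup, assuming root privileges plus iproute2 and iptables; the interface, namespace, and address names are copied from the trace:

ip netns add nvmf_tgt_ns_spdk
# veth pairs: one initiator-side pair, two target-side pairs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring every link up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the root-namespace ends together and open the NVMe/TCP port
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity sanity check, as in the log above
ping -c 1 10.0.0.2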
00:12:02.476 [2024-07-11 05:58:18.393270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.476 [2024-07-11 05:58:18.393348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.476 [2024-07-11 05:58:18.393414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.476 [2024-07-11 05:58:18.393523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.733 [2024-07-11 05:58:18.537496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 [2024-07-11 05:58:19.088538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 [2024-07-11 05:58:19.104614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
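By this point the trace has started nvmf_tgt inside the namespace, created the TCP transport with zero-copy enabled, created the nqn.2016-06.io.spdk:cnode1 subsystem with its 10.0.0.2:4420 listener, and requested a malloc bdev to back the namespace. The rpc_cmd helper forwards its arguments to scripts/rpc.py, so the same provisioning can be sketched against a hand-started target; this is an illustration under that assumption (paths relative to an SPDK build tree), not the harness's exact code path:

# start the target inside the namespace: core mask 0x2, all tracepoint groups enabled
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# give the target a moment to create /var/tmp/spdk.sock before issuing RPCs
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # flags as in the trace; --zcopy enables zero-copy on the TCP transport
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, serial number, max 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM-backed bdev with 4096-byte blocks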
00:12:03.302 malloc0 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:03.302 { 00:12:03.302 "params": { 00:12:03.302 "name": "Nvme$subsystem", 00:12:03.302 "trtype": "$TEST_TRANSPORT", 00:12:03.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.302 "adrfam": "ipv4", 00:12:03.302 "trsvcid": "$NVMF_PORT", 00:12:03.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.302 "hdgst": ${hdgst:-false}, 00:12:03.302 "ddgst": ${ddgst:-false} 00:12:03.302 }, 00:12:03.302 "method": "bdev_nvme_attach_controller" 00:12:03.302 } 00:12:03.302 EOF 00:12:03.302 )") 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:03.302 05:58:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:03.302 "params": { 00:12:03.302 "name": "Nvme1", 00:12:03.302 "trtype": "tcp", 00:12:03.302 "traddr": "10.0.0.2", 00:12:03.302 "adrfam": "ipv4", 00:12:03.302 "trsvcid": "4420", 00:12:03.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.302 "hdgst": false, 00:12:03.302 "ddgst": false 00:12:03.302 }, 00:12:03.302 "method": "bdev_nvme_attach_controller" 00:12:03.302 }' 00:12:03.560 [2024-07-11 05:58:19.262577] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:12:03.560 [2024-07-11 05:58:19.262774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70362 ] 00:12:03.560 [2024-07-11 05:58:19.442039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.818 [2024-07-11 05:58:19.669188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.075 [2024-07-11 05:58:19.861054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:04.334 Running I/O for 10 seconds... 
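The verify pass above drives I/O from a second SPDK application: bdevperf receives a JSON config over /dev/fd/62 whose relevant entry is the bdev_nvme_attach_controller call printed by gen_nvmf_target_json. A standalone equivalent with the config written to an ordinary file is sketched below; the /tmp path and the outer "subsystems" wrapper are illustrative assumptions, while the params block and bdevperf flags are copied from the trace:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10-second verify workload, queue depth 128, 8 KiB I/O size, against the attached Nvme1n1 bdev
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192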
00:12:14.308 00:12:14.308 Latency(us) 00:12:14.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.308 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:14.308 Verification LBA range: start 0x0 length 0x1000 00:12:14.308 Nvme1n1 : 10.02 4640.45 36.25 0.00 0.00 27505.74 4051.32 35031.97 00:12:14.308 =================================================================================================================== 00:12:14.308 Total : 4640.45 36.25 0.00 0.00 27505.74 4051.32 35031.97 00:12:15.245 05:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70491 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.246 { 00:12:15.246 "params": { 00:12:15.246 "name": "Nvme$subsystem", 00:12:15.246 "trtype": "$TEST_TRANSPORT", 00:12:15.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.246 "adrfam": "ipv4", 00:12:15.246 "trsvcid": "$NVMF_PORT", 00:12:15.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.246 "hdgst": ${hdgst:-false}, 00:12:15.246 "ddgst": ${ddgst:-false} 00:12:15.246 }, 00:12:15.246 "method": "bdev_nvme_attach_controller" 00:12:15.246 } 00:12:15.246 EOF 00:12:15.246 )") 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:15.246 [2024-07-11 05:58:30.952853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:30.953148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
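After the verify summary, the test launches a second bdevperf run (5 seconds of randrw with a 50/50 read/write mix, queue depth 128, 8 KiB I/O) while the target log fills with pairs of "Requested NSID 1 already in use" / "Unable to add namespace" errors. Judging by the function names in those messages, the pair is what spdk_nvmf_subsystem_add_ns_ext reports when an nvmf_subsystem_add_ns RPC requests an NSID that is already attached. A hedged illustration of both pieces, not the test script's exact loop (it reuses the illustrative /tmp config from the earlier sketch):

# concurrent I/O load during this phase, flags mirrored from the trace
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
# NSID 1 is already attached to cnode1, so a repeated add fails with
# "Requested NSID 1 already in use" followed by "Unable to add namespace"
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1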
00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:15.246 05:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.246 "params": { 00:12:15.246 "name": "Nvme1", 00:12:15.246 "trtype": "tcp", 00:12:15.246 "traddr": "10.0.0.2", 00:12:15.246 "adrfam": "ipv4", 00:12:15.246 "trsvcid": "4420", 00:12:15.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.246 "hdgst": false, 00:12:15.246 "ddgst": false 00:12:15.246 }, 00:12:15.246 "method": "bdev_nvme_attach_controller" 00:12:15.246 }' 00:12:15.246 [2024-07-11 05:58:30.964801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:30.964841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:30.976810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:30.977003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:30.988803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:30.989015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.000816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.001025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.012825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.012996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.024804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.025033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.036820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.036859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.039051] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:12:15.246 [2024-07-11 05:58:31.039754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70491 ] 00:12:15.246 [2024-07-11 05:58:31.048796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.048967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.060838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.061057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.072804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.073026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.084834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.085040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.096887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.097079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.108824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.108862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.120880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.120941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.132841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.132879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.144865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.144911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.246 [2024-07-11 05:58:31.156862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.246 [2024-07-11 05:58:31.156899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.168869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.168926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.180889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.180946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.192875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.192933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.204279] app.c: 908:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:12:15.506 [2024-07-11 05:58:31.204874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.204906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.216938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.217005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.232916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.232986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.244880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.244935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.256911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.256949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.268897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.268952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.276943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.276996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.288916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.288971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.300970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.301033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.312922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.312980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.324980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.325050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.336925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.336965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.348969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.349006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.360953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.361007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.373045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:15.506 [2024-07-11 05:58:31.373082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.383366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.506 [2024-07-11 05:58:31.384992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.385031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.397065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.397107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.409032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.409089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.506 [2024-07-11 05:58:31.421032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.506 [2024-07-11 05:58:31.421086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.433003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.433050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.445061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.445097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.457031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.457079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.469092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.469148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.481065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.481121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.493027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.493063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.505067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.505124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.517107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.517149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.529058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.529113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.541077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.541113] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.553069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.553123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.565091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.565127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.570534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:15.766 [2024-07-11 05:58:31.577128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.577193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.589141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.589187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.601145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.601217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.613128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.613163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.625110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.625150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.637158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.637194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.649136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.649175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.661149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.661187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.673158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.673213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.766 [2024-07-11 05:58:31.685219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.766 [2024-07-11 05:58:31.685283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.025 [2024-07-11 05:58:31.697307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.025 [2024-07-11 05:58:31.697347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.025 [2024-07-11 05:58:31.709240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.025 [2024-07-11 05:58:31.709284] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.025 [2024-07-11 05:58:31.721274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.025 [2024-07-11 05:58:31.721313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.733279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.733337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.745291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.745333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 Running I/O for 5 seconds... 00:12:16.026 [2024-07-11 05:58:31.762237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.762334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.774758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.774823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.793342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.793391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.809964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.810017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.826704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.826774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.842739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.842835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.860134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.860179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.875642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.875751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.891485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.891543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.906772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.906871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.919524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.919568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.026 [2024-07-11 05:58:31.938679] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.026 [2024-07-11 05:58:31.938746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:31.953881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:31.953924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:31.971479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:31.971526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:31.987986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:31.988035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.004526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.004592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.021253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.021296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.037563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.037625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.049655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.049759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.065946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.066006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.082065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.082109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.094229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.094289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.285 [2024-07-11 05:58:32.112871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.285 [2024-07-11 05:58:32.112913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.286 [2024-07-11 05:58:32.128839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.286 [2024-07-11 05:58:32.128899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.286 [2024-07-11 05:58:32.146857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.286 [2024-07-11 05:58:32.146917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.286 [2024-07-11 05:58:32.162723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.286 [2024-07-11 05:58:32.162788] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.286 [2024-07-11 05:58:32.180230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.286 [2024-07-11 05:58:32.180298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.286 [2024-07-11 05:58:32.192912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.286 [2024-07-11 05:58:32.192957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.211999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.212065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.229130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.229202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.245042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.245085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.260472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.260554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.273402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.273444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.291846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.291906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.308088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.308134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.320640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.320738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.338261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.338303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.351784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.351853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.368884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.368934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.384857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.545 [2024-07-11 05:58:32.384919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.545 [2024-07-11 05:58:32.397780] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:16.545 [2024-07-11 05:58:32.397825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of *ERROR* lines repeats continuously, a new pair every 12-20 ms with only the timestamps changing, from 05:58:32.397 through 05:58:36.763 (elapsed 00:12:16.545 -> 00:12:20.971); the repeated entries are condensed to this note ...]
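The two *ERROR* lines above record the NVMe-oF target rejecting an nvmf_subsystem_add_ns RPC because NSID 1 is already attached to the subsystem, after which the RPC layer reports that it could not add the namespace. A minimal sketch of how such a rejection can be provoked over SPDK's JSON-RPC Unix socket follows; the socket path, subsystem NQN, bdev name and the exact parameter layout are assumptions that vary by setup and SPDK version, so treat it as an illustration rather than the code this test actually runs.

    # Minimal sketch (not part of the test suite): provoke the "Requested NSID 1
    # already in use" rejection by sending nvmf_subsystem_add_ns twice with the
    # same NSID. Socket path, NQN, bdev name and parameter layout are assumptions.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"        # default SPDK RPC socket (assumed)
    NQN = "nqn.2016-06.io.spdk:cnode1"      # hypothetical subsystem NQN
    BDEV = "Malloc0"                        # hypothetical bdev name

    def rpc(method, params, req_id=1):
        """Send one JSON-RPC 2.0 request over the Unix socket and return the parsed reply."""
        req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            sock.sendall(json.dumps(req).encode())
            buf = b""
            while not buf.strip().endswith(b"}"):   # naive framing, enough for one small reply
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
        return json.loads(buf)

    params = {"nqn": NQN, "namespace": {"bdev_name": BDEV, "nsid": 1}}
    print(rpc("nvmf_subsystem_add_ns", params, req_id=1))  # first call attaches NSID 1
    print(rpc("nvmf_subsystem_add_ns", params, req_id=2))  # second call is rejected; the target
                                                           # logs the same error pair seen above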
00:12:20.971 
00:12:20.971 Latency(us)
00:12:20.971 Device Information                                                             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:20.971 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:20.971          Nvme1n1                                                               :       5.01    8970.85      70.08       0.00     0.00   14248.17    5838.66   23473.80
00:12:20.971 ===================================================================================================================
00:12:20.971 Total                                                                          :              8970.85      70.08       0.00     0.00   14248.17    5838.66   23473.80
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-07-11 05:58:36.943584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:36.955529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:36.955578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:36.967596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:36.967649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:36.979550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:36.979598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:36.991542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:36.991591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.003570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.003652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.015689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.015796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.027542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.027590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.039632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.039681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.051565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.051614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.063628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.063722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.075697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.075806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.087585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.087634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.099602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.099636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.111579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.111629] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.123587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.123636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.135587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.135638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.231 [2024-07-11 05:58:37.147682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.231 [2024-07-11 05:58:37.147731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.159583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.159630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.171798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.171867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.183691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.183766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.195602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.195686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.207727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.207775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.219611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.219684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.231632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.231721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.243644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.243720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.255687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.255763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.267794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.267831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.279674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.279734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.291643] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.291716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.303677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.303738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.315702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.315751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.327789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.327825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.339741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.339789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.351768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.351826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.363783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.363835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.375792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.375843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.387721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.387769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-07-11 05:58:37.399740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-07-11 05:58:37.399790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.411860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.411925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.423752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.423800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.435736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.435798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.447756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.447847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.459866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.459934] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.749 [2024-07-11 05:58:37.471804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.749 [2024-07-11 05:58:37.471853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.483834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.483883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.495891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.495939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.507788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.507836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.519863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.519921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.531836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.531899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.543853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.543904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.555904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.555969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.567855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.567908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.579843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.579895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.591842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.591902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.603965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.604030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.623873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.623923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.635872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.635922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.647905] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.647954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-07-11 05:58:37.659904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-07-11 05:58:37.659952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.671910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.671959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.683923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.683971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.696122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.696172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.707950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.708014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.719968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.720016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.732052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.732089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.743985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.744074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.756024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.756113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.768017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.768093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.780029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.780088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.792025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.792081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.804014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.804060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.816102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.816153] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.828039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.828130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.840017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.840090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.852112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.852148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.864134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.864171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.876104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.876142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.009 [2024-07-11 05:58:37.888167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.009 [2024-07-11 05:58:37.888205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70491) - No such process 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70491 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.010 delay0 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.010 05:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:22.269 [2024-07-11 05:58:38.133408] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:28.835 Initializing NVMe Controllers 00:12:28.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:28.835 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:28.835 Initialization complete. Launching workers. 00:12:28.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94 00:12:28.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33 00:12:28.835 success 261, unsuccess 120, failed 0 00:12:28.835 05:58:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.836 rmmod nvme_tcp 00:12:28.836 rmmod nvme_fabrics 00:12:28.836 rmmod nvme_keyring 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70329 ']' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70329 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 70329 ']' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 70329 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70329 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:28.836 killing process with pid 70329 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70329' 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 70329 00:12:28.836 05:58:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 70329 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:29.771 
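Annotation: the long stream of "Requested NSID 1 already in use" / "Unable to add namespace" errors above is the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns against a namespace ID that is still attached; the test then swaps the namespace for a delay bdev and drives it with the abort example, producing the abort statistics just printed. The sketch below reconstructs that RPC sequence with scripts/rpc.py (the rpc_cmd wrapper in the trace forwards to this script). Paths, bdev names, and flags are copied from the trace; the first add_ns line is an assumed equivalent of the loop body, which is not shown verbatim in this chunk, and running this outside the autotest harness is also an assumption.

# Sketch of the tail end of zcopy.sh as seen in the trace above. Assumes a
# target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a
# malloc bdev named malloc0 attached as NSID 1.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Re-adding NSID 1 while it is still in use reproduces the errors above
# ("Requested NSID 1 already in use" / "Unable to add namespace").
# Assumed loop body; the exact command is not shown in this chunk.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

# Replace the namespace with a delay bdev (values are microseconds, so this
# injects roughly 1 s of latency per I/O), as shown in the trace.
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive the now-slow namespace with the abort example for 5 seconds at QD 64,
# which yields the success/unsuccess/abort counters printed above.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
  -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'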
00:12:29.771 real 0m27.988s 00:12:29.771 user 0m46.356s 00:12:29.771 sys 0m6.950s 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.771 ************************************ 00:12:29.771 END TEST nvmf_zcopy 00:12:29.771 05:58:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 ************************************ 00:12:29.771 05:58:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.771 05:58:45 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:29.771 05:58:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.771 05:58:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.771 05:58:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 ************************************ 00:12:29.771 START TEST nvmf_nmic 00:12:29.771 ************************************ 00:12:29.771 05:58:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:29.771 * Looking for test storage... 00:12:30.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.030 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.031 05:58:45 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:30.031 Cannot find device "nvmf_tgt_br" 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.031 Cannot find device "nvmf_tgt_br2" 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:30.031 Cannot find device "nvmf_tgt_br" 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:30.031 Cannot find device "nvmf_tgt_br2" 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip 
link delete nvmf_br type bridge 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.031 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.290 05:58:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:30.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:30.290 00:12:30.290 --- 10.0.0.2 ping statistics --- 00:12:30.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.290 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:30.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:30.290 00:12:30.290 --- 10.0.0.3 ping statistics --- 00:12:30.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.290 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:30.290 00:12:30.290 --- 10.0.0.1 ping statistics --- 00:12:30.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.290 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70829 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70829 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 70829 ']' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.290 05:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:30.290 [2024-07-11 05:58:46.189810] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
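Annotation: the ping replies above are the sanity checks at the end of nvmf_veth_init, which builds the NET_TYPE=virt topology this test runs on: an initiator veth pair left in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side ends; the target app is then started inside that namespace. Below is a condensed sketch of the same commands, with names and addresses copied from the trace; error handling and cleanup done by the real helper are omitted.

# Condensed sketch of nvmf_veth_init as it appears in the trace above.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the initiator end stays in the root namespace, the target ends
# move into the namespace where nvmf_tgt will run
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, inside and outside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side ends together and allow NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks, matching the ping output above
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3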
00:12:30.290 [2024-07-11 05:58:46.190559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.549 [2024-07-11 05:58:46.369958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.808 [2024-07-11 05:58:46.613956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.808 [2024-07-11 05:58:46.614046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.808 [2024-07-11 05:58:46.614087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.808 [2024-07-11 05:58:46.614105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.808 [2024-07-11 05:58:46.614120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.808 [2024-07-11 05:58:46.614973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.808 [2024-07-11 05:58:46.615205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.808 [2024-07-11 05:58:46.615518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.808 [2024-07-11 05:58:46.615546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.067 [2024-07-11 05:58:46.812968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.326 [2024-07-11 05:58:47.150413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.326 Malloc0 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
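Annotation: once the target reports "TCP Transport Init", nmic.sh provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and subsystem cnode1. A minimal sketch of those calls is below, assuming rpc.py on the default /var/tmp/spdk.sock socket is what the rpc_cmd wrapper resolves to; the flag values are copied from the trace (the namespace and the 10.0.0.2:4420 listener are added in the next trace chunk).

# Sketch of the target provisioning traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport; -o and -u 8192 are the harness's TCP transport options
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1, any host allowed (-a), with the serial the tests grep for
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME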
00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.326 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 [2024-07-11 05:58:47.256903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.585 test case1: single bdev can't be used in multiple subsystems 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 [2024-07-11 05:58:47.284635] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:31.585 [2024-07-11 05:58:47.284934] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:31.585 [2024-07-11 05:58:47.285131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.585 request: 00:12:31.585 { 00:12:31.585 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:31.585 "namespace": { 00:12:31.585 "bdev_name": "Malloc0", 00:12:31.585 "no_auto_visible": false 00:12:31.585 }, 00:12:31.585 "method": "nvmf_subsystem_add_ns", 00:12:31.585 "req_id": 1 00:12:31.585 } 00:12:31.585 Got JSON-RPC error response 00:12:31.585 response: 00:12:31.585 { 00:12:31.585 "code": -32602, 00:12:31.585 "message": "Invalid parameters" 00:12:31.585 } 00:12:31.585 Adding namespace failed - expected result. 
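Annotation: test case1 above verifies that a bdev already claimed by one subsystem cannot be added to another: Malloc0 belongs to cnode1, so adding it to cnode2 fails with "bdev Malloc0 already claimed" and the -32602 "Invalid parameters" JSON-RPC response, which the script treats as the expected result. A minimal reproduction of the same check is sketched below; it assumes the provisioning from the previous sketch and reuses the same rpc.py helper.

# test case1 sketch: the second add_ns is expected to fail.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    # Malloc0 is already claimed by cnode1, so reaching this branch is a bug
    echo "unexpected: Malloc0 was added to a second subsystem" >&2
    exit 1
fi
echo ' Adding namespace failed - expected result.'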
00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:31.585 test case2: host connect to nvmf target in multiple paths 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 [2024-07-11 05:58:47.296866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:31.585 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.586 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.586 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:31.844 05:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.844 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.844 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.844 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:31.844 05:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:33.746 05:58:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:33.746 [global] 00:12:33.746 thread=1 00:12:33.746 invalidate=1 00:12:33.746 rw=write 00:12:33.746 time_based=1 00:12:33.746 runtime=1 00:12:33.746 ioengine=libaio 00:12:33.746 direct=1 00:12:33.746 bs=4096 00:12:33.746 iodepth=1 00:12:33.746 norandommap=0 00:12:33.746 numjobs=1 00:12:33.746 00:12:33.746 verify_dump=1 00:12:33.746 verify_backlog=512 00:12:33.746 verify_state_save=0 00:12:33.746 do_verify=1 00:12:33.746 verify=crc32c-intel 00:12:33.746 [job0] 00:12:33.746 filename=/dev/nvme0n1 00:12:33.746 Could not set queue depth (nvme0n1) 00:12:34.004 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.004 fio-3.35 00:12:34.004 Starting 1 thread 00:12:34.939 00:12:34.939 job0: (groupid=0, jobs=1): err= 0: pid=70926: Thu Jul 11 05:58:50 2024 00:12:34.939 read: IOPS=2233, BW=8935KiB/s (9150kB/s)(8944KiB/1001msec) 00:12:34.939 slat (usec): min=11, max=115, avg=15.85, stdev= 7.67 00:12:34.939 clat (usec): min=166, max=1594, avg=222.87, stdev=46.36 00:12:34.939 lat (usec): min=179, max=1670, avg=238.73, stdev=50.17 00:12:34.939 clat percentiles (usec): 00:12:34.939 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:12:34.939 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:12:34.939 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 285], 00:12:34.939 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 416], 99.95th=[ 685], 00:12:34.939 | 99.99th=[ 1598] 00:12:34.939 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:34.939 slat (usec): min=15, max=125, avg=25.21, stdev=13.39 00:12:34.939 clat (usec): min=106, max=325, avg=153.26, stdev=39.07 00:12:34.939 lat (usec): min=123, max=395, avg=178.47, stdev=49.22 00:12:34.939 clat percentiles (usec): 00:12:34.939 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:12:34.939 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 149], 00:12:34.939 | 70.00th=[ 159], 80.00th=[ 180], 90.00th=[ 215], 95.00th=[ 239], 00:12:34.939 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 318], 00:12:34.939 | 99.99th=[ 326] 00:12:34.939 bw ( KiB/s): min= 9208, max= 9208, per=90.01%, avg=9208.00, stdev= 0.00, samples=1 00:12:34.939 iops : min= 2302, max= 2302, avg=2302.00, stdev= 0.00, samples=1 00:12:34.939 lat (usec) : 250=90.35%, 500=9.61%, 750=0.02% 00:12:34.939 lat (msec) : 2=0.02% 00:12:34.939 cpu : usr=1.40%, sys=8.30%, ctx=4796, majf=0, minf=2 00:12:34.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.940 issued rwts: total=2236,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.940 00:12:34.940 Run status group 0 (all jobs): 00:12:34.940 READ: bw=8935KiB/s (9150kB/s), 8935KiB/s-8935KiB/s (9150kB/s-9150kB/s), io=8944KiB (9159kB), run=1001-1001msec 00:12:34.940 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:34.940 00:12:34.940 Disk stats (read/write): 00:12:34.940 nvme0n1: ios=2098/2190, merge=0/0, ticks=478/380, in_queue=858, util=91.58% 00:12:34.940 05:58:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.198 05:58:50 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.198 05:58:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.198 rmmod nvme_tcp 00:12:35.198 rmmod nvme_fabrics 00:12:35.198 rmmod nvme_keyring 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70829 ']' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70829 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 70829 ']' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 70829 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70829 00:12:35.198 killing process with pid 70829 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70829' 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 70829 00:12:35.198 05:58:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 70829 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:36.611 00:12:36.611 real 0m6.704s 00:12:36.611 user 0m20.178s 00:12:36.611 sys 0m2.384s 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.611 ************************************ 00:12:36.611 END TEST nvmf_nmic 00:12:36.611 ************************************ 00:12:36.611 05:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 
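The nmic run traced above reduces to a short connect / exercise / disconnect sequence. The recap below is a condensed, hand-written sketch of that flow, not the actual target/nmic.sh script: it uses only commands that appear in the trace, the NQNs, host ID, serial and 10.0.0.2 portal are the values this run printed, rpc.py stands in for the test's rpc_cmd wrapper, and the polling loop is a simplified stand-in for the waitforserial helper.

#!/usr/bin/env bash
# Sketch of the nvmf_nmic "multiple paths" case seen in the log above (illustrative only).
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368
hostid=8738190a-dd44-4449-9019-403e2a10a368

# Expose a second TCP listener so the same subsystem is reachable over two portals.
"$rpc" nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4421

# Connect the host through both portals (the 4420 listener was created earlier in the test).
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4421

# Simplified stand-in for waitforserial: poll until the namespace shows up by serial.
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

# Drive a short verified 4k write workload through the shared namespace, then tear down both paths.
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n "$subnqn"

Because both portals lead to the same subsystem, the namespace appears once (fio runs against /dev/nvme0n1), while the final disconnect reports two controllers being torn down, exactly as the "disconnected 2 controller(s)" line in the trace shows.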
00:12:36.611 05:58:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:36.611 05:58:52 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:36.611 05:58:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.611 05:58:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.611 05:58:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.611 ************************************ 00:12:36.611 START TEST nvmf_fio_target 00:12:36.611 ************************************ 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:36.611 * Looking for test storage... 00:12:36.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.611 05:58:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:36.612 Cannot find device "nvmf_tgt_br" 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.612 Cannot find device "nvmf_tgt_br2" 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:12:36.612 Cannot find device "nvmf_tgt_br" 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:36.612 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:36.871 Cannot find device "nvmf_tgt_br2" 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:12:36.871 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:37.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:12:37.130 00:12:37.130 --- 10.0.0.2 ping statistics --- 00:12:37.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.130 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:37.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:37.130 00:12:37.130 --- 10.0.0.3 ping statistics --- 00:12:37.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.130 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:37.130 00:12:37.130 --- 10.0.0.1 ping statistics --- 00:12:37.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.130 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71111 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71111 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 71111 ']' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.130 05:58:52 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.130 05:58:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.130 [2024-07-11 05:58:52.966964] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:12:37.130 [2024-07-11 05:58:52.967123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.389 [2024-07-11 05:58:53.140982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.648 [2024-07-11 05:58:53.374237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.648 [2024-07-11 05:58:53.374311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.648 [2024-07-11 05:58:53.374333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.648 [2024-07-11 05:58:53.374350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.648 [2024-07-11 05:58:53.374364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.648 [2024-07-11 05:58:53.374567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.648 [2024-07-11 05:58:53.375392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.648 [2024-07-11 05:58:53.375568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.648 [2024-07-11 05:58:53.375669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.648 [2024-07-11 05:58:53.558921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.214 05:58:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.472 [2024-07-11 05:58:54.185775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.472 05:58:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:38.731 05:58:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:38.731 05:58:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:12:38.991 05:58:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:38.991 05:58:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.250 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:39.250 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.510 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:39.510 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:39.769 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:40.028 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:40.028 05:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:40.596 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:40.596 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:40.596 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:40.596 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:40.855 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.114 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:41.114 05:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.373 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:41.373 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.632 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.891 [2024-07-11 05:58:57.553318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.891 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:41.891 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:42.149 05:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.408 05:58:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:42.408 05:58:58 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:12:42.409 05:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.409 05:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:42.409 05:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:42.409 05:58:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:44.313 05:59:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:44.313 [global] 00:12:44.313 thread=1 00:12:44.313 invalidate=1 00:12:44.313 rw=write 00:12:44.313 time_based=1 00:12:44.313 runtime=1 00:12:44.313 ioengine=libaio 00:12:44.313 direct=1 00:12:44.313 bs=4096 00:12:44.313 iodepth=1 00:12:44.313 norandommap=0 00:12:44.313 numjobs=1 00:12:44.313 00:12:44.313 verify_dump=1 00:12:44.313 verify_backlog=512 00:12:44.313 verify_state_save=0 00:12:44.313 do_verify=1 00:12:44.313 verify=crc32c-intel 00:12:44.313 [job0] 00:12:44.313 filename=/dev/nvme0n1 00:12:44.313 [job1] 00:12:44.313 filename=/dev/nvme0n2 00:12:44.313 [job2] 00:12:44.313 filename=/dev/nvme0n3 00:12:44.313 [job3] 00:12:44.313 filename=/dev/nvme0n4 00:12:44.313 Could not set queue depth (nvme0n1) 00:12:44.313 Could not set queue depth (nvme0n2) 00:12:44.313 Could not set queue depth (nvme0n3) 00:12:44.313 Could not set queue depth (nvme0n4) 00:12:44.572 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.572 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.572 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.572 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.572 fio-3.35 00:12:44.572 Starting 4 threads 00:12:45.951 00:12:45.951 job0: (groupid=0, jobs=1): err= 0: pid=71295: Thu Jul 11 05:59:01 2024 00:12:45.951 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:45.951 slat (nsec): min=8732, max=46235, avg=14683.61, stdev=3626.11 00:12:45.951 clat (usec): min=162, max=1867, avg=239.53, stdev=64.06 00:12:45.951 lat (usec): min=175, max=1883, avg=254.21, stdev=63.54 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:12:45.951 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 239], 60.00th=[ 260], 00:12:45.951 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:12:45.951 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 437], 99.95th=[ 510], 00:12:45.951 | 99.99th=[ 1860] 00:12:45.951 write: IOPS=2424, BW=9698KiB/s (9931kB/s)(9708KiB/1001msec); 0 zone resets 00:12:45.951 slat (nsec): 
min=10772, max=87408, avg=21402.96, stdev=6119.78 00:12:45.951 clat (usec): min=91, max=804, avg=172.97, stdev=42.00 00:12:45.951 lat (usec): min=130, max=828, avg=194.37, stdev=40.72 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 137], 00:12:45.951 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 159], 60.00th=[ 182], 00:12:45.951 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 243], 00:12:45.951 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 408], 99.95th=[ 412], 00:12:45.951 | 99.99th=[ 807] 00:12:45.951 bw ( KiB/s): min=12288, max=12288, per=30.43%, avg=12288.00, stdev= 0.00, samples=1 00:12:45.951 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:45.951 lat (usec) : 100=0.02%, 250=77.18%, 500=22.73%, 750=0.02%, 1000=0.02% 00:12:45.951 lat (msec) : 2=0.02% 00:12:45.951 cpu : usr=1.10%, sys=7.10%, ctx=4478, majf=0, minf=6 00:12:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 issued rwts: total=2048,2427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.951 job1: (groupid=0, jobs=1): err= 0: pid=71296: Thu Jul 11 05:59:01 2024 00:12:45.951 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:45.951 slat (nsec): min=8337, max=58517, avg=15878.67, stdev=5819.06 00:12:45.951 clat (usec): min=159, max=427, avg=237.62, stdev=55.11 00:12:45.951 lat (usec): min=174, max=444, avg=253.50, stdev=52.50 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:12:45.951 | 30.00th=[ 190], 40.00th=[ 200], 50.00th=[ 241], 60.00th=[ 262], 00:12:45.951 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:12:45.951 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 392], 99.95th=[ 404], 00:12:45.951 | 99.99th=[ 429] 00:12:45.951 write: IOPS=2394, BW=9578KiB/s (9808kB/s)(9588KiB/1001msec); 0 zone resets 00:12:45.951 slat (nsec): min=10746, max=99349, avg=23149.32, stdev=8282.14 00:12:45.951 clat (usec): min=109, max=4167, avg=173.97, stdev=115.98 00:12:45.951 lat (usec): min=127, max=4184, avg=197.12, stdev=115.28 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 133], 00:12:45.951 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 155], 60.00th=[ 178], 00:12:45.951 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 247], 00:12:45.951 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 930], 99.95th=[ 3523], 00:12:45.951 | 99.99th=[ 4178] 00:12:45.951 bw ( KiB/s): min=12288, max=12288, per=30.43%, avg=12288.00, stdev= 0.00, samples=1 00:12:45.951 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:45.951 lat (usec) : 250=76.27%, 500=23.67%, 1000=0.02% 00:12:45.951 lat (msec) : 4=0.02%, 10=0.02% 00:12:45.951 cpu : usr=1.30%, sys=7.50%, ctx=4447, majf=0, minf=5 00:12:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 issued rwts: total=2048,2397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.951 job2: (groupid=0, 
jobs=1): err= 0: pid=71297: Thu Jul 11 05:59:01 2024 00:12:45.951 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:45.951 slat (nsec): min=11158, max=47480, avg=14167.90, stdev=3517.11 00:12:45.951 clat (usec): min=166, max=550, avg=197.90, stdev=21.75 00:12:45.951 lat (usec): min=179, max=564, avg=212.07, stdev=22.56 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 182], 00:12:45.951 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:12:45.951 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:12:45.951 | 99.00th=[ 273], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 396], 00:12:45.951 | 99.99th=[ 553] 00:12:45.951 write: IOPS=2717, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:12:45.951 slat (nsec): min=13509, max=67538, avg=20607.56, stdev=4993.56 00:12:45.951 clat (usec): min=113, max=1097, avg=144.22, stdev=30.46 00:12:45.951 lat (usec): min=131, max=1135, avg=164.83, stdev=31.68 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 129], 00:12:45.951 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:12:45.951 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 176], 00:12:45.951 | 99.00th=[ 196], 99.50th=[ 208], 99.90th=[ 660], 99.95th=[ 685], 00:12:45.951 | 99.99th=[ 1106] 00:12:45.951 bw ( KiB/s): min=12288, max=12288, per=30.43%, avg=12288.00, stdev= 0.00, samples=1 00:12:45.951 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:45.951 lat (usec) : 250=99.11%, 500=0.81%, 750=0.06% 00:12:45.951 lat (msec) : 2=0.02% 00:12:45.951 cpu : usr=2.20%, sys=6.90%, ctx=5281, majf=0, minf=11 00:12:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 issued rwts: total=2560,2720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.951 job3: (groupid=0, jobs=1): err= 0: pid=71298: Thu Jul 11 05:59:01 2024 00:12:45.951 read: IOPS=2455, BW=9822KiB/s (10.1MB/s)(9832KiB/1001msec) 00:12:45.951 slat (nsec): min=10500, max=51032, avg=14724.65, stdev=5000.29 00:12:45.951 clat (usec): min=175, max=7968, avg=210.96, stdev=237.02 00:12:45.951 lat (usec): min=188, max=8019, avg=225.69, stdev=237.80 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 188], 00:12:45.951 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:12:45.951 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:12:45.951 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 3523], 99.95th=[ 7963], 00:12:45.951 | 99.99th=[ 7963] 00:12:45.951 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:45.951 slat (nsec): min=14982, max=77098, avg=22769.27, stdev=8489.20 00:12:45.951 clat (usec): min=120, max=3656, avg=147.62, stdev=78.89 00:12:45.951 lat (usec): min=137, max=3703, avg=170.39, stdev=80.22 00:12:45.951 clat percentiles (usec): 00:12:45.951 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:12:45.951 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:12:45.951 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 178], 00:12:45.951 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 233], 99.95th=[ 1860], 
00:12:45.951 | 99.99th=[ 3654] 00:12:45.951 bw ( KiB/s): min=11000, max=11000, per=27.24%, avg=11000.00, stdev= 0.00, samples=1 00:12:45.951 iops : min= 2750, max= 2750, avg=2750.00, stdev= 0.00, samples=1 00:12:45.951 lat (usec) : 250=99.30%, 500=0.54%, 750=0.02%, 1000=0.02% 00:12:45.951 lat (msec) : 2=0.02%, 4=0.06%, 10=0.04% 00:12:45.951 cpu : usr=2.20%, sys=7.10%, ctx=5019, majf=0, minf=13 00:12:45.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.951 issued rwts: total=2458,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.951 00:12:45.951 Run status group 0 (all jobs): 00:12:45.951 READ: bw=35.6MiB/s (37.3MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=35.6MiB (37.3MB), run=1001-1001msec 00:12:45.951 WRITE: bw=39.4MiB/s (41.3MB/s), 9578KiB/s-10.6MiB/s (9808kB/s-11.1MB/s), io=39.5MiB (41.4MB), run=1001-1001msec 00:12:45.951 00:12:45.951 Disk stats (read/write): 00:12:45.951 nvme0n1: ios=1901/2048, merge=0/0, ticks=464/353, in_queue=817, util=87.07% 00:12:45.951 nvme0n2: ios=1878/2048, merge=0/0, ticks=468/356, in_queue=824, util=88.64% 00:12:45.951 nvme0n3: ios=2048/2497, merge=0/0, ticks=412/381, in_queue=793, util=89.13% 00:12:45.951 nvme0n4: ios=2048/2202, merge=0/0, ticks=430/346, in_queue=776, util=88.54% 00:12:45.951 05:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:45.951 [global] 00:12:45.951 thread=1 00:12:45.951 invalidate=1 00:12:45.951 rw=randwrite 00:12:45.951 time_based=1 00:12:45.951 runtime=1 00:12:45.951 ioengine=libaio 00:12:45.951 direct=1 00:12:45.951 bs=4096 00:12:45.951 iodepth=1 00:12:45.951 norandommap=0 00:12:45.951 numjobs=1 00:12:45.951 00:12:45.951 verify_dump=1 00:12:45.951 verify_backlog=512 00:12:45.951 verify_state_save=0 00:12:45.951 do_verify=1 00:12:45.951 verify=crc32c-intel 00:12:45.951 [job0] 00:12:45.951 filename=/dev/nvme0n1 00:12:45.951 [job1] 00:12:45.951 filename=/dev/nvme0n2 00:12:45.951 [job2] 00:12:45.952 filename=/dev/nvme0n3 00:12:45.952 [job3] 00:12:45.952 filename=/dev/nvme0n4 00:12:45.952 Could not set queue depth (nvme0n1) 00:12:45.952 Could not set queue depth (nvme0n2) 00:12:45.952 Could not set queue depth (nvme0n3) 00:12:45.952 Could not set queue depth (nvme0n4) 00:12:45.952 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:45.952 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:45.952 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:45.952 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:45.952 fio-3.35 00:12:45.952 Starting 4 threads 00:12:47.330 00:12:47.330 job0: (groupid=0, jobs=1): err= 0: pid=71357: Thu Jul 11 05:59:02 2024 00:12:47.330 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:47.330 slat (nsec): min=8602, max=62041, avg=14641.21, stdev=4798.45 00:12:47.330 clat (usec): min=236, max=811, avg=298.04, stdev=31.77 00:12:47.330 lat (usec): min=248, max=827, avg=312.68, stdev=32.00 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 253], 5.00th=[ 260], 
10.00th=[ 265], 20.00th=[ 273], 00:12:47.330 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:12:47.330 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 338], 00:12:47.330 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 701], 99.95th=[ 816], 00:12:47.330 | 99.99th=[ 816] 00:12:47.330 write: IOPS=1989, BW=7956KiB/s (8147kB/s)(7964KiB/1001msec); 0 zone resets 00:12:47.330 slat (nsec): min=10040, max=76805, avg=21099.56, stdev=7279.29 00:12:47.330 clat (usec): min=180, max=368, avg=236.93, stdev=25.33 00:12:47.330 lat (usec): min=198, max=404, avg=258.03, stdev=26.62 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:12:47.330 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:12:47.330 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:12:47.330 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 371], 00:12:47.330 | 99.99th=[ 371] 00:12:47.330 bw ( KiB/s): min= 8192, max= 8192, per=21.68%, avg=8192.00, stdev= 0.00, samples=1 00:12:47.330 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:47.330 lat (usec) : 250=39.69%, 500=60.22%, 750=0.06%, 1000=0.03% 00:12:47.330 cpu : usr=1.60%, sys=5.30%, ctx=3527, majf=0, minf=7 00:12:47.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 issued rwts: total=1536,1991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.330 job1: (groupid=0, jobs=1): err= 0: pid=71358: Thu Jul 11 05:59:02 2024 00:12:47.330 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:12:47.330 slat (nsec): min=10521, max=51760, avg=14171.78, stdev=4623.09 00:12:47.330 clat (usec): min=160, max=571, avg=195.30, stdev=21.48 00:12:47.330 lat (usec): min=172, max=584, avg=209.47, stdev=22.43 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:12:47.330 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:12:47.330 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 231], 00:12:47.330 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 326], 99.95th=[ 478], 00:12:47.330 | 99.99th=[ 570] 00:12:47.330 write: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:12:47.330 slat (nsec): min=15277, max=67697, avg=20155.23, stdev=5598.88 00:12:47.330 clat (usec): min=107, max=218, avg=135.68, stdev=17.43 00:12:47.330 lat (usec): min=123, max=283, avg=155.84, stdev=19.31 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 112], 5.00th=[ 116], 10.00th=[ 118], 20.00th=[ 121], 00:12:47.330 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:12:47.330 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 172], 00:12:47.330 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 208], 99.95th=[ 215], 00:12:47.330 | 99.99th=[ 219] 00:12:47.330 bw ( KiB/s): min=12288, max=12288, per=32.51%, avg=12288.00, stdev= 0.00, samples=1 00:12:47.330 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:47.330 lat (usec) : 250=99.69%, 500=0.29%, 750=0.02% 00:12:47.330 cpu : usr=2.70%, sys=6.60%, ctx=5476, majf=0, minf=7 00:12:47.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.330 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 issued rwts: total=2560,2915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.330 job2: (groupid=0, jobs=1): err= 0: pid=71359: Thu Jul 11 05:59:02 2024 00:12:47.330 read: IOPS=2518, BW=9.84MiB/s (10.3MB/s)(9.85MiB/1001msec) 00:12:47.330 slat (nsec): min=10838, max=57221, avg=14249.88, stdev=3892.97 00:12:47.330 clat (usec): min=165, max=2201, avg=204.74, stdev=48.18 00:12:47.330 lat (usec): min=179, max=2218, avg=218.99, stdev=48.62 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:12:47.330 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:12:47.330 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 243], 00:12:47.330 | 99.00th=[ 269], 99.50th=[ 347], 99.90th=[ 570], 99.95th=[ 594], 00:12:47.330 | 99.99th=[ 2212] 00:12:47.330 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:47.330 slat (nsec): min=13402, max=66376, avg=21451.40, stdev=5793.08 00:12:47.330 clat (usec): min=115, max=785, avg=150.18, stdev=25.87 00:12:47.330 lat (usec): min=132, max=816, avg=171.63, stdev=27.57 00:12:47.330 clat percentiles (usec): 00:12:47.330 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:12:47.330 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:12:47.330 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 188], 00:12:47.330 | 99.00th=[ 221], 99.50th=[ 251], 99.90th=[ 322], 99.95th=[ 433], 00:12:47.330 | 99.99th=[ 783] 00:12:47.330 bw ( KiB/s): min=12096, max=12096, per=32.00%, avg=12096.00, stdev= 0.00, samples=1 00:12:47.330 iops : min= 3024, max= 3024, avg=3024.00, stdev= 0.00, samples=1 00:12:47.330 lat (usec) : 250=98.33%, 500=1.57%, 750=0.06%, 1000=0.02% 00:12:47.330 lat (msec) : 4=0.02% 00:12:47.330 cpu : usr=1.60%, sys=7.30%, ctx=5081, majf=0, minf=21 00:12:47.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.330 issued rwts: total=2521,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.330 job3: (groupid=0, jobs=1): err= 0: pid=71360: Thu Jul 11 05:59:02 2024 00:12:47.330 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:47.330 slat (nsec): min=8189, max=41390, avg=10778.25, stdev=3868.33 00:12:47.331 clat (usec): min=242, max=754, avg=302.27, stdev=30.79 00:12:47.331 lat (usec): min=254, max=765, avg=313.04, stdev=31.06 00:12:47.331 clat percentiles (usec): 00:12:47.331 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:12:47.331 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:12:47.331 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 347], 00:12:47.331 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 652], 99.95th=[ 758], 00:12:47.331 | 99.99th=[ 758] 00:12:47.331 write: IOPS=1990, BW=7960KiB/s (8151kB/s)(7968KiB/1001msec); 0 zone resets 00:12:47.331 slat (nsec): min=10438, max=76513, avg=17899.56, stdev=6605.22 00:12:47.331 clat (usec): min=123, max=388, avg=240.28, stdev=26.59 00:12:47.331 lat (usec): min=152, max=399, avg=258.18, stdev=27.13 00:12:47.331 clat percentiles (usec): 
00:12:47.331 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 217], 00:12:47.331 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:12:47.331 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:12:47.331 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 388], 00:12:47.331 | 99.99th=[ 388] 00:12:47.331 bw ( KiB/s): min= 8192, max= 8192, per=21.68%, avg=8192.00, stdev= 0.00, samples=1 00:12:47.331 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:47.331 lat (usec) : 250=37.70%, 500=62.22%, 750=0.06%, 1000=0.03% 00:12:47.331 cpu : usr=1.50%, sys=4.00%, ctx=3528, majf=0, minf=10 00:12:47.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.331 issued rwts: total=1536,1992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.331 00:12:47.331 Run status group 0 (all jobs): 00:12:47.331 READ: bw=31.8MiB/s (33.4MB/s), 6138KiB/s-10.0MiB/s (6285kB/s-10.5MB/s), io=31.8MiB (33.4MB), run=1000-1001msec 00:12:47.331 WRITE: bw=36.9MiB/s (38.7MB/s), 7956KiB/s-11.4MiB/s (8147kB/s-11.9MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:12:47.331 00:12:47.331 Disk stats (read/write): 00:12:47.331 nvme0n1: ios=1545/1536, merge=0/0, ticks=484/370, in_queue=854, util=89.17% 00:12:47.331 nvme0n2: ios=2260/2560, merge=0/0, ticks=479/384, in_queue=863, util=90.19% 00:12:47.331 nvme0n3: ios=2048/2392, merge=0/0, ticks=440/401, in_queue=841, util=89.44% 00:12:47.331 nvme0n4: ios=1502/1536, merge=0/0, ticks=445/351, in_queue=796, util=90.00% 00:12:47.331 05:59:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:47.331 [global] 00:12:47.331 thread=1 00:12:47.331 invalidate=1 00:12:47.331 rw=write 00:12:47.331 time_based=1 00:12:47.331 runtime=1 00:12:47.331 ioengine=libaio 00:12:47.331 direct=1 00:12:47.331 bs=4096 00:12:47.331 iodepth=128 00:12:47.331 norandommap=0 00:12:47.331 numjobs=1 00:12:47.331 00:12:47.331 verify_dump=1 00:12:47.331 verify_backlog=512 00:12:47.331 verify_state_save=0 00:12:47.331 do_verify=1 00:12:47.331 verify=crc32c-intel 00:12:47.331 [job0] 00:12:47.331 filename=/dev/nvme0n1 00:12:47.331 [job1] 00:12:47.331 filename=/dev/nvme0n2 00:12:47.331 [job2] 00:12:47.331 filename=/dev/nvme0n3 00:12:47.331 [job3] 00:12:47.331 filename=/dev/nvme0n4 00:12:47.331 Could not set queue depth (nvme0n1) 00:12:47.331 Could not set queue depth (nvme0n2) 00:12:47.331 Could not set queue depth (nvme0n3) 00:12:47.331 Could not set queue depth (nvme0n4) 00:12:47.331 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:47.331 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:47.331 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:47.331 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:47.331 fio-3.35 00:12:47.331 Starting 4 threads 00:12:48.706 00:12:48.706 job0: (groupid=0, jobs=1): err= 0: pid=71414: Thu Jul 11 05:59:04 2024 00:12:48.706 read: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec) 00:12:48.706 slat (usec): min=5, max=5665, 
avg=171.52, stdev=868.97 00:12:48.706 clat (usec): min=691, max=24303, avg=21899.97, stdev=2469.33 00:12:48.706 lat (usec): min=5632, max=24316, avg=22071.49, stdev=2312.98 00:12:48.706 clat percentiles (usec): 00:12:48.706 | 1.00th=[ 6063], 5.00th=[17433], 10.00th=[21365], 20.00th=[21627], 00:12:48.706 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22414], 00:12:48.706 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23462], 95.00th=[23725], 00:12:48.706 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:12:48.706 | 99.99th=[24249] 00:12:48.706 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:12:48.706 slat (usec): min=11, max=5536, avg=168.36, stdev=818.00 00:12:48.706 clat (usec): min=15746, max=24389, avg=21882.70, stdev=1186.88 00:12:48.706 lat (usec): min=15891, max=24414, avg=22051.06, stdev=863.70 00:12:48.706 clat percentiles (usec): 00:12:48.706 | 1.00th=[16909], 5.00th=[20841], 10.00th=[20841], 20.00th=[21365], 00:12:48.706 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:12:48.706 | 70.00th=[22414], 80.00th=[22414], 90.00th=[23200], 95.00th=[23987], 00:12:48.706 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24249], 99.95th=[24511], 00:12:48.706 | 99.99th=[24511] 00:12:48.706 bw ( KiB/s): min=12288, max=12288, per=25.31%, avg=12288.00, stdev= 0.00, samples=2 00:12:48.706 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:12:48.706 lat (usec) : 750=0.02% 00:12:48.706 lat (msec) : 10=0.56%, 20=4.18%, 50=95.24% 00:12:48.706 cpu : usr=2.90%, sys=7.99%, ctx=181, majf=0, minf=17 00:12:48.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:48.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.707 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.707 job1: (groupid=0, jobs=1): err= 0: pid=71415: Thu Jul 11 05:59:04 2024 00:12:48.707 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:48.707 slat (usec): min=6, max=6287, avg=168.70, stdev=727.49 00:12:48.707 clat (usec): min=14979, max=34332, avg=21856.00, stdev=4087.01 00:12:48.707 lat (usec): min=15004, max=34348, avg=22024.70, stdev=4148.26 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[15270], 5.00th=[17957], 10.00th=[18220], 20.00th=[18482], 00:12:48.707 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19792], 60.00th=[21103], 00:12:48.707 | 70.00th=[23987], 80.00th=[26870], 90.00th=[28181], 95.00th=[29492], 00:12:48.707 | 99.00th=[31589], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:12:48.707 | 99.99th=[34341] 00:12:48.707 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec); 0 zone resets 00:12:48.707 slat (usec): min=14, max=7274, avg=182.37, stdev=741.25 00:12:48.707 clat (usec): min=3627, max=59797, avg=23725.14, stdev=11948.51 00:12:48.707 lat (usec): min=3653, max=59821, avg=23907.51, stdev=12031.90 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[ 8848], 5.00th=[12911], 10.00th=[13042], 20.00th=[15139], 00:12:48.707 | 30.00th=[15926], 40.00th=[17957], 50.00th=[18744], 60.00th=[19268], 00:12:48.707 | 70.00th=[25035], 80.00th=[34866], 90.00th=[44303], 95.00th=[50070], 00:12:48.707 | 99.00th=[56361], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:12:48.707 | 99.99th=[60031] 00:12:48.707 bw ( KiB/s): min=10448, max=12312, 
per=23.44%, avg=11380.00, stdev=1318.05, samples=2 00:12:48.707 iops : min= 2612, max= 3078, avg=2845.00, stdev=329.51, samples=2 00:12:48.707 lat (msec) : 4=0.14%, 10=0.43%, 20=57.12%, 50=39.77%, 100=2.53% 00:12:48.707 cpu : usr=3.09%, sys=9.47%, ctx=285, majf=0, minf=7 00:12:48.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:48.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.707 issued rwts: total=2560,2969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.707 job2: (groupid=0, jobs=1): err= 0: pid=71416: Thu Jul 11 05:59:04 2024 00:12:48.707 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1004msec) 00:12:48.707 slat (usec): min=5, max=5673, avg=171.22, stdev=863.96 00:12:48.707 clat (usec): min=945, max=24400, avg=21906.75, stdev=2519.87 00:12:48.707 lat (usec): min=5292, max=24416, avg=22077.97, stdev=2369.18 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[ 5735], 5.00th=[17433], 10.00th=[21365], 20.00th=[21627], 00:12:48.707 | 30.00th=[22152], 40.00th=[22152], 50.00th=[22152], 60.00th=[22414], 00:12:48.707 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23462], 95.00th=[23725], 00:12:48.707 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:12:48.707 | 99.99th=[24511] 00:12:48.707 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:12:48.707 slat (usec): min=11, max=5671, avg=169.18, stdev=818.71 00:12:48.707 clat (usec): min=15820, max=24662, avg=21932.23, stdev=1250.11 00:12:48.707 lat (usec): min=17144, max=24687, avg=22101.40, stdev=950.78 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[16909], 5.00th=[20579], 10.00th=[20841], 20.00th=[21103], 00:12:48.707 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:12:48.707 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23462], 95.00th=[23987], 00:12:48.707 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:12:48.707 | 99.99th=[24773] 00:12:48.707 bw ( KiB/s): min=12288, max=12312, per=25.34%, avg=12300.00, stdev=16.97, samples=2 00:12:48.707 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:48.707 lat (usec) : 1000=0.02% 00:12:48.707 lat (msec) : 10=0.56%, 20=4.18%, 50=95.24% 00:12:48.707 cpu : usr=2.79%, sys=8.18%, ctx=182, majf=0, minf=6 00:12:48.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:48.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.707 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.707 job3: (groupid=0, jobs=1): err= 0: pid=71417: Thu Jul 11 05:59:04 2024 00:12:48.707 read: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec) 00:12:48.707 slat (usec): min=6, max=9318, avg=179.33, stdev=942.12 00:12:48.707 clat (usec): min=1677, max=38424, avg=22746.05, stdev=6418.88 00:12:48.707 lat (usec): min=5078, max=38450, avg=22925.38, stdev=6397.46 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[ 5669], 5.00th=[14484], 10.00th=[16712], 20.00th=[17433], 00:12:48.707 | 30.00th=[18220], 40.00th=[19530], 50.00th=[23462], 60.00th=[24511], 00:12:48.707 | 70.00th=[25297], 80.00th=[25822], 90.00th=[31589], 95.00th=[38011], 
00:12:48.707 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:12:48.707 | 99.99th=[38536] 00:12:48.707 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:12:48.707 slat (usec): min=11, max=8668, avg=145.02, stdev=695.69 00:12:48.707 clat (usec): min=11033, max=31274, avg=19319.19, stdev=4049.97 00:12:48.707 lat (usec): min=13376, max=31301, avg=19464.21, stdev=4011.01 00:12:48.707 clat percentiles (usec): 00:12:48.707 | 1.00th=[13435], 5.00th=[13829], 10.00th=[14091], 20.00th=[15664], 00:12:48.707 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18482], 60.00th=[19268], 00:12:48.707 | 70.00th=[21103], 80.00th=[23987], 90.00th=[25035], 95.00th=[26346], 00:12:48.707 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:12:48.707 | 99.99th=[31327] 00:12:48.707 bw ( KiB/s): min=12288, max=12288, per=25.31%, avg=12288.00, stdev= 0.00, samples=2 00:12:48.707 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:12:48.707 lat (msec) : 2=0.02%, 10=0.95%, 20=51.50%, 50=47.53% 00:12:48.707 cpu : usr=3.49%, sys=9.08%, ctx=190, majf=0, minf=5 00:12:48.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:48.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:48.707 issued rwts: total=2945,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:48.707 00:12:48.707 Run status group 0 (all jobs): 00:12:48.707 READ: bw=42.3MiB/s (44.4MB/s), 9.96MiB/s-11.5MiB/s (10.4MB/s-12.0MB/s), io=42.5MiB (44.6MB), run=1002-1004msec 00:12:48.707 WRITE: bw=47.4MiB/s (49.7MB/s), 11.6MiB/s-12.0MiB/s (12.1MB/s-12.6MB/s), io=47.6MiB (49.9MB), run=1002-1004msec 00:12:48.707 00:12:48.707 Disk stats (read/write): 00:12:48.707 nvme0n1: ios=2418/2560, merge=0/0, ticks=11814/11918, in_queue=23732, util=87.96% 00:12:48.707 nvme0n2: ios=2496/2560, merge=0/0, ticks=17403/16005, in_queue=33408, util=89.26% 00:12:48.707 nvme0n3: ios=2368/2560, merge=0/0, ticks=11579/11866, in_queue=23445, util=89.14% 00:12:48.707 nvme0n4: ios=2400/2560, merge=0/0, ticks=13957/10844, in_queue=24801, util=89.91% 00:12:48.707 05:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:48.707 [global] 00:12:48.707 thread=1 00:12:48.707 invalidate=1 00:12:48.707 rw=randwrite 00:12:48.707 time_based=1 00:12:48.707 runtime=1 00:12:48.707 ioengine=libaio 00:12:48.707 direct=1 00:12:48.707 bs=4096 00:12:48.707 iodepth=128 00:12:48.707 norandommap=0 00:12:48.707 numjobs=1 00:12:48.707 00:12:48.707 verify_dump=1 00:12:48.707 verify_backlog=512 00:12:48.707 verify_state_save=0 00:12:48.707 do_verify=1 00:12:48.707 verify=crc32c-intel 00:12:48.707 [job0] 00:12:48.707 filename=/dev/nvme0n1 00:12:48.707 [job1] 00:12:48.707 filename=/dev/nvme0n2 00:12:48.707 [job2] 00:12:48.707 filename=/dev/nvme0n3 00:12:48.707 [job3] 00:12:48.707 filename=/dev/nvme0n4 00:12:48.707 Could not set queue depth (nvme0n1) 00:12:48.707 Could not set queue depth (nvme0n2) 00:12:48.707 Could not set queue depth (nvme0n3) 00:12:48.707 Could not set queue depth (nvme0n4) 00:12:48.707 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:48.707 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:12:48.707 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:48.707 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:48.707 fio-3.35 00:12:48.707 Starting 4 threads 00:12:50.085 00:12:50.085 job0: (groupid=0, jobs=1): err= 0: pid=71475: Thu Jul 11 05:59:05 2024 00:12:50.085 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:12:50.085 slat (usec): min=9, max=6329, avg=90.82, stdev=551.76 00:12:50.085 clat (usec): min=7601, max=21908, avg=12838.88, stdev=1419.91 00:12:50.085 lat (usec): min=7615, max=25599, avg=12929.70, stdev=1443.48 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:12:50.085 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:12:50.085 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[14353], 00:12:50.085 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:12:50.085 | 99.99th=[21890] 00:12:50.085 write: IOPS=5282, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:12:50.085 slat (usec): min=10, max=8304, avg=93.58, stdev=547.03 00:12:50.085 clat (usec): min=616, max=17636, avg=11605.99, stdev=1392.64 00:12:50.085 lat (usec): min=5031, max=17668, avg=11699.57, stdev=1313.92 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[ 6128], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10945], 00:12:50.085 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:12:50.085 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13435], 00:12:50.085 | 99.00th=[15533], 99.50th=[15664], 99.90th=[16581], 99.95th=[16581], 00:12:50.085 | 99.99th=[17695] 00:12:50.085 bw ( KiB/s): min=20536, max=20976, per=33.60%, avg=20756.00, stdev=311.13, samples=2 00:12:50.085 iops : min= 5134, max= 5244, avg=5189.00, stdev=77.78, samples=2 00:12:50.085 lat (usec) : 750=0.01% 00:12:50.085 lat (msec) : 10=5.51%, 20=94.09%, 50=0.38% 00:12:50.085 cpu : usr=5.78%, sys=13.45%, ctx=222, majf=0, minf=1 00:12:50.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:50.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.085 issued rwts: total=5120,5309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.085 job1: (groupid=0, jobs=1): err= 0: pid=71476: Thu Jul 11 05:59:05 2024 00:12:50.085 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:50.085 slat (usec): min=5, max=16309, avg=244.93, stdev=1565.87 00:12:50.085 clat (usec): min=10669, max=67502, avg=31809.51, stdev=15292.46 00:12:50.085 lat (usec): min=13535, max=67531, avg=32054.44, stdev=15343.16 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13829], 20.00th=[13960], 00:12:50.085 | 30.00th=[25822], 40.00th=[26870], 50.00th=[27657], 60.00th=[32900], 00:12:50.085 | 70.00th=[36963], 80.00th=[40109], 90.00th=[56361], 95.00th=[64750], 00:12:50.085 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:12:50.085 | 99.99th=[67634] 00:12:50.085 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(9.88MiB/1001msec); 0 zone resets 00:12:50.085 slat (usec): min=13, max=15046, avg=188.14, stdev=1062.43 00:12:50.085 clat (usec): min=271, max=53116, avg=23824.23, stdev=10285.68 00:12:50.085 
lat (usec): min=313, max=53200, avg=24012.38, stdev=10290.50 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[ 3228], 5.00th=[12387], 10.00th=[12780], 20.00th=[13566], 00:12:50.085 | 30.00th=[15401], 40.00th=[19792], 50.00th=[23987], 60.00th=[25035], 00:12:50.085 | 70.00th=[26608], 80.00th=[33817], 90.00th=[39584], 95.00th=[41681], 00:12:50.085 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:12:50.085 | 99.99th=[53216] 00:12:50.085 bw ( KiB/s): min= 8208, max= 8208, per=13.29%, avg=8208.00, stdev= 0.00, samples=1 00:12:50.085 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:12:50.085 lat (usec) : 500=0.04% 00:12:50.085 lat (msec) : 4=0.70%, 10=1.40%, 20=31.76%, 50=57.91%, 100=8.19% 00:12:50.085 cpu : usr=2.20%, sys=8.50%, ctx=146, majf=0, minf=10 00:12:50.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:50.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.085 issued rwts: total=2048,2530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.085 job2: (groupid=0, jobs=1): err= 0: pid=71477: Thu Jul 11 05:59:05 2024 00:12:50.085 read: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:12:50.085 slat (usec): min=8, max=21625, avg=177.04, stdev=1252.06 00:12:50.085 clat (usec): min=1734, max=45998, avg=24570.43, stdev=6681.76 00:12:50.085 lat (usec): min=5222, max=47705, avg=24747.47, stdev=6736.02 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[ 5800], 5.00th=[14615], 10.00th=[15139], 20.00th=[18744], 00:12:50.085 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23462], 60.00th=[25822], 00:12:50.085 | 70.00th=[29754], 80.00th=[31851], 90.00th=[32637], 95.00th=[33424], 00:12:50.085 | 99.00th=[36963], 99.50th=[37487], 99.90th=[44303], 99.95th=[45351], 00:12:50.085 | 99.99th=[45876] 00:12:50.085 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:12:50.085 slat (usec): min=7, max=15764, avg=138.45, stdev=895.99 00:12:50.085 clat (usec): min=514, max=32400, avg=16953.37, stdev=3275.96 00:12:50.085 lat (usec): min=562, max=32448, avg=17091.82, stdev=3203.73 00:12:50.085 clat percentiles (usec): 00:12:50.085 | 1.00th=[ 6849], 5.00th=[13566], 10.00th=[14353], 20.00th=[15270], 00:12:50.085 | 30.00th=[15664], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:12:50.085 | 70.00th=[18482], 80.00th=[19268], 90.00th=[20317], 95.00th=[21890], 00:12:50.085 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28181], 99.95th=[32113], 00:12:50.085 | 99.99th=[32375] 00:12:50.085 bw ( KiB/s): min=12288, max=12312, per=19.91%, avg=12300.00, stdev=16.97, samples=2 00:12:50.085 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:50.086 lat (usec) : 750=0.26% 00:12:50.086 lat (msec) : 2=0.02%, 10=2.00%, 20=52.03%, 50=45.69% 00:12:50.086 cpu : usr=2.69%, sys=10.46%, ctx=174, majf=0, minf=5 00:12:50.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:50.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.086 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.086 job3: (groupid=0, jobs=1): err= 0: pid=71478: Thu Jul 11 05:59:05 2024 00:12:50.086 read: 
IOPS=4338, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1002msec) 00:12:50.086 slat (usec): min=5, max=4163, avg=108.41, stdev=428.13 00:12:50.086 clat (usec): min=531, max=18717, avg=14127.12, stdev=1583.27 00:12:50.086 lat (usec): min=2480, max=19541, avg=14235.53, stdev=1615.30 00:12:50.086 clat percentiles (usec): 00:12:50.086 | 1.00th=[ 7373], 5.00th=[12125], 10.00th=[13042], 20.00th=[13698], 00:12:50.086 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:12:50.086 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15795], 95.00th=[16450], 00:12:50.086 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17695], 99.95th=[17957], 00:12:50.086 | 99.99th=[18744] 00:12:50.086 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:50.086 slat (usec): min=11, max=4468, avg=106.59, stdev=466.85 00:12:50.086 clat (usec): min=10352, max=19812, avg=14138.13, stdev=1228.12 00:12:50.086 lat (usec): min=10378, max=19862, avg=14244.72, stdev=1297.72 00:12:50.086 clat percentiles (usec): 00:12:50.086 | 1.00th=[11338], 5.00th=[12780], 10.00th=[12911], 20.00th=[13173], 00:12:50.086 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:12:50.086 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15926], 95.00th=[16450], 00:12:50.086 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:12:50.086 | 99.99th=[19792] 00:12:50.086 bw ( KiB/s): min=18104, max=18797, per=29.87%, avg=18450.50, stdev=490.02, samples=2 00:12:50.086 iops : min= 4526, max= 4699, avg=4612.50, stdev=122.33, samples=2 00:12:50.086 lat (usec) : 750=0.01% 00:12:50.086 lat (msec) : 4=0.22%, 10=0.49%, 20=99.27% 00:12:50.086 cpu : usr=3.50%, sys=14.49%, ctx=423, majf=0, minf=1 00:12:50.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:50.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.086 issued rwts: total=4347,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.086 00:12:50.086 Run status group 0 (all jobs): 00:12:50.086 READ: bw=56.7MiB/s (59.4MB/s), 8184KiB/s-19.9MiB/s (8380kB/s-20.9MB/s), io=57.0MiB (59.7MB), run=1001-1005msec 00:12:50.086 WRITE: bw=60.3MiB/s (63.2MB/s), 9.87MiB/s-20.6MiB/s (10.4MB/s-21.6MB/s), io=60.6MiB (63.6MB), run=1001-1005msec 00:12:50.086 00:12:50.086 Disk stats (read/write): 00:12:50.086 nvme0n1: ios=4412/4608, merge=0/0, ticks=52001/48807, in_queue=100808, util=88.47% 00:12:50.086 nvme0n2: ios=1585/1888, merge=0/0, ticks=14397/11409, in_queue=25806, util=90.09% 00:12:50.086 nvme0n3: ios=2399/2560, merge=0/0, ticks=59427/41639, in_queue=101066, util=90.02% 00:12:50.086 nvme0n4: ios=3698/4096, merge=0/0, ticks=16729/16323, in_queue=33052, util=89.96% 00:12:50.086 05:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:50.086 05:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71498 00:12:50.086 05:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:50.086 05:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:50.086 [global] 00:12:50.086 thread=1 00:12:50.086 invalidate=1 00:12:50.086 rw=read 00:12:50.086 time_based=1 00:12:50.086 runtime=10 00:12:50.086 ioengine=libaio 00:12:50.086 direct=1 00:12:50.086 bs=4096 00:12:50.086 iodepth=1 00:12:50.086 norandommap=1 00:12:50.086 numjobs=1 00:12:50.086 
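The per-device [job] sections follow immediately below; taken together with the [global] options above they fully specify the job. Judging from the two wrapper invocations in this log, scripts/fio-wrapper simply turns its -i/-d/-t/-r/-v flags into the bs/iodepth/rw/runtime/verify options echoed here. A minimal stand-alone approximation of this 10-second, queue-depth-1 read job is sketched below; the job-file path is arbitrary and the device names are the ones listed next in the log, so treat it as an illustration rather than the wrapper itself.

# Hand-written equivalent of the job file assembled above
# (option values copied from the [global]/[job] sections shown in this log;
# /tmp/nvmf-read.fio is an arbitrary, illustrative path)
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-read.fio

A 10 s time_based run at iodepth=1 keeps a slow stream of reads in flight, which is what lets the bdev deletions a few lines further down (issued after a 3 s sleep) be observed as Remote I/O errors mid-run.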
00:12:50.086 [job0] 00:12:50.086 filename=/dev/nvme0n1 00:12:50.086 [job1] 00:12:50.086 filename=/dev/nvme0n2 00:12:50.086 [job2] 00:12:50.086 filename=/dev/nvme0n3 00:12:50.086 [job3] 00:12:50.086 filename=/dev/nvme0n4 00:12:50.086 Could not set queue depth (nvme0n1) 00:12:50.086 Could not set queue depth (nvme0n2) 00:12:50.086 Could not set queue depth (nvme0n3) 00:12:50.086 Could not set queue depth (nvme0n4) 00:12:50.086 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.086 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.086 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.086 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:50.086 fio-3.35 00:12:50.086 Starting 4 threads 00:12:53.371 05:59:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:53.371 fio: pid=71541, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:53.371 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43499520, buflen=4096 00:12:53.371 05:59:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:53.371 fio: pid=71540, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:53.371 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=57597952, buflen=4096 00:12:53.371 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:53.371 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:53.630 fio: pid=71538, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:53.630 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=47636480, buflen=4096 00:12:53.889 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:53.889 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:53.889 fio: pid=71539, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:53.889 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8675328, buflen=4096 00:12:54.149 00:12:54.149 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71538: Thu Jul 11 05:59:09 2024 00:12:54.149 read: IOPS=3337, BW=13.0MiB/s (13.7MB/s)(45.4MiB/3485msec) 00:12:54.149 slat (usec): min=8, max=16765, avg=19.42, stdev=220.66 00:12:54.149 clat (usec): min=149, max=7572, avg=278.78, stdev=102.38 00:12:54.149 lat (usec): min=164, max=16976, avg=298.19, stdev=242.65 00:12:54.149 clat percentiles (usec): 00:12:54.149 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 202], 20.00th=[ 258], 00:12:54.149 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:12:54.149 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 343], 00:12:54.149 | 99.00th=[ 392], 99.50th=[ 433], 99.90th=[ 709], 99.95th=[ 1647], 00:12:54.149 | 99.99th=[ 4178] 00:12:54.149 bw ( KiB/s): min=11440, max=13416, per=22.64%, avg=12875.50, stdev=791.65, samples=6 00:12:54.149 iops : min= 2860, max= 3354, 
avg=3218.83, stdev=197.89, samples=6 00:12:54.149 lat (usec) : 250=16.49%, 500=83.22%, 750=0.19%, 1000=0.03% 00:12:54.149 lat (msec) : 2=0.03%, 4=0.01%, 10=0.03% 00:12:54.149 cpu : usr=1.55%, sys=4.28%, ctx=11637, majf=0, minf=1 00:12:54.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 issued rwts: total=11631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:54.149 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71539: Thu Jul 11 05:59:09 2024 00:12:54.149 read: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(72.3MiB/3855msec) 00:12:54.149 slat (usec): min=7, max=8906, avg=16.83, stdev=139.85 00:12:54.149 clat (usec): min=138, max=2068, avg=190.05, stdev=40.12 00:12:54.149 lat (usec): min=150, max=9149, avg=206.89, stdev=147.27 00:12:54.149 clat percentiles (usec): 00:12:54.149 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:12:54.149 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:12:54.149 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 223], 95.00th=[ 237], 00:12:54.149 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 578], 99.95th=[ 766], 00:12:54.149 | 99.99th=[ 2040] 00:12:54.149 bw ( KiB/s): min=17230, max=20224, per=33.75%, avg=19195.00, stdev=1087.20, samples=7 00:12:54.149 iops : min= 4307, max= 5056, avg=4798.57, stdev=271.96, samples=7 00:12:54.149 lat (usec) : 250=96.92%, 500=2.93%, 750=0.09%, 1000=0.02% 00:12:54.149 lat (msec) : 2=0.02%, 4=0.01% 00:12:54.149 cpu : usr=1.69%, sys=5.50%, ctx=18516, majf=0, minf=1 00:12:54.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 issued rwts: total=18503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:54.149 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71540: Thu Jul 11 05:59:09 2024 00:12:54.149 read: IOPS=4379, BW=17.1MiB/s (17.9MB/s)(54.9MiB/3211msec) 00:12:54.149 slat (usec): min=11, max=7801, avg=15.85, stdev=90.38 00:12:54.149 clat (usec): min=162, max=4770, avg=210.74, stdev=72.90 00:12:54.149 lat (usec): min=174, max=8014, avg=226.59, stdev=117.55 00:12:54.149 clat percentiles (usec): 00:12:54.149 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:12:54.149 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:12:54.149 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 306], 95.00th=[ 334], 00:12:54.149 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[ 676], 99.95th=[ 914], 00:12:54.149 | 99.99th=[ 2114] 00:12:54.149 bw ( KiB/s): min=11448, max=19656, per=30.72%, avg=17474.00, stdev=3184.95, samples=6 00:12:54.149 iops : min= 2862, max= 4914, avg=4368.50, stdev=796.24, samples=6 00:12:54.149 lat (usec) : 250=85.56%, 500=14.23%, 750=0.11%, 1000=0.04% 00:12:54.149 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:12:54.149 cpu : usr=1.65%, sys=5.70%, ctx=14065, majf=0, minf=1 00:12:54.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:54.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 issued rwts: total=14063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:54.149 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71541: Thu Jul 11 05:59:09 2024 00:12:54.149 read: IOPS=3613, BW=14.1MiB/s (14.8MB/s)(41.5MiB/2939msec) 00:12:54.149 slat (nsec): min=8077, max=86049, avg=12817.17, stdev=4783.23 00:12:54.149 clat (usec): min=163, max=1710, avg=262.58, stdev=53.11 00:12:54.149 lat (usec): min=178, max=1720, avg=275.39, stdev=52.21 00:12:54.149 clat percentiles (usec): 00:12:54.149 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 208], 00:12:54.149 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:12:54.149 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 326], 00:12:54.149 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 537], 99.95th=[ 824], 00:12:54.149 | 99.99th=[ 1614] 00:12:54.149 bw ( KiB/s): min=13304, max=18304, per=25.87%, avg=14712.00, stdev=2160.30, samples=5 00:12:54.149 iops : min= 3326, max= 4576, avg=3678.00, stdev=540.08, samples=5 00:12:54.149 lat (usec) : 250=28.78%, 500=71.09%, 750=0.06%, 1000=0.03% 00:12:54.149 lat (msec) : 2=0.04% 00:12:54.149 cpu : usr=0.58%, sys=4.53%, ctx=10622, majf=0, minf=1 00:12:54.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.149 issued rwts: total=10621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:54.149 00:12:54.149 Run status group 0 (all jobs): 00:12:54.149 READ: bw=55.5MiB/s (58.2MB/s), 13.0MiB/s-18.7MiB/s (13.7MB/s-19.7MB/s), io=214MiB (225MB), run=2939-3855msec 00:12:54.149 00:12:54.149 Disk stats (read/write): 00:12:54.149 nvme0n1: ios=11181/0, merge=0/0, ticks=3075/0, in_queue=3075, util=94.96% 00:12:54.149 nvme0n2: ios=17293/0, merge=0/0, ticks=3337/0, in_queue=3337, util=95.69% 00:12:54.149 nvme0n3: ios=13622/0, merge=0/0, ticks=2905/0, in_queue=2905, util=96.40% 00:12:54.149 nvme0n4: ios=10411/0, merge=0/0, ticks=2630/0, in_queue=2630, util=96.76% 00:12:54.149 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:54.149 05:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:54.717 05:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:54.717 05:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:54.976 05:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:54.976 05:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:55.235 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:55.235 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:55.803 05:59:11 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:55.803 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 71498 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.063 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.321 nvmf hotplug test: fio failed as expected 00:12:56.321 05:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:56.321 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:56.321 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:56.321 05:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.321 05:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:56.580 05:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:56.580 05:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:56.580 05:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:56.580 05:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.581 rmmod nvme_tcp 00:12:56.581 rmmod nvme_fabrics 00:12:56.581 rmmod nvme_keyring 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71111 ']' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71111 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- 
# '[' -z 71111 ']' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 71111 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71111 00:12:56.581 killing process with pid 71111 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71111' 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 71111 00:12:56.581 05:59:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 71111 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:57.519 ************************************ 00:12:57.519 END TEST nvmf_fio_target 00:12:57.519 ************************************ 00:12:57.519 00:12:57.519 real 0m21.006s 00:12:57.519 user 1m16.756s 00:12:57.519 sys 0m10.689s 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.519 05:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.519 05:59:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.519 05:59:13 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:57.519 05:59:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.519 05:59:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.519 05:59:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.519 ************************************ 00:12:57.519 START TEST nvmf_bdevio 00:12:57.519 ************************************ 00:12:57.519 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:57.825 * Looking for test storage... 
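Before the bdevio setup below gets going, the hotplug step that just finished above is worth a short recap: the backing bdevs are deleted over RPC while the 10-second read job is still running, so fio's reads terminate with err=121 (Remote I/O error) and a nonzero fio exit status is the expected outcome. A condensed, illustrative sketch of that sequence follows, with the script path and bdev names exactly as they appear above; the inline fio command line is only a stand-in for the wrapper-generated job, not the test script itself.

# Hot-removal sketch: start a slow read job, then pull the bdevs out from
# under it. Device names and RPC calls are the ones visible in the log above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
fio --name=hotplug --ioengine=libaio --direct=1 --rw=read --bs=4096 \
    --iodepth=1 --time_based --runtime=10 \
    --filename=/dev/nvme0n1:/dev/nvme0n2:/dev/nvme0n3:/dev/nvme0n4 &
fio_pid=$!
sleep 3                                # let I/O get going first, as fio.sh does
$RPC bdev_raid_delete concat0          # concat/raid volumes first
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $RPC bdev_malloc_delete "$m"         # then the malloc base bdevs
done
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'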
00:12:57.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.825 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.826 05:59:13 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:57.826 Cannot find device "nvmf_tgt_br" 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.826 Cannot find device "nvmf_tgt_br2" 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:57.826 Cannot find device "nvmf_tgt_br" 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:57.826 Cannot find device "nvmf_tgt_br2" 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:57.826 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:58.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:58.084 00:12:58.084 --- 10.0.0.2 ping statistics --- 00:12:58.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.084 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:58.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:58.084 00:12:58.084 --- 10.0.0.3 ping statistics --- 00:12:58.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.084 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:58.084 00:12:58.084 --- 10.0.0.1 ping statistics --- 00:12:58.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.084 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=71822 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 71822 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 71822 ']' 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.084 05:59:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 [2024-07-11 05:59:14.001881] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:12:58.084 [2024-07-11 05:59:14.002085] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.343 [2024-07-11 05:59:14.177540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.602 [2024-07-11 05:59:14.401089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.602 [2024-07-11 05:59:14.401175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:58.602 [2024-07-11 05:59:14.401190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.602 [2024-07-11 05:59:14.401202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.602 [2024-07-11 05:59:14.401211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.602 [2024-07-11 05:59:14.401426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:58.602 [2024-07-11 05:59:14.402226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:58.602 [2024-07-11 05:59:14.402434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.602 [2024-07-11 05:59:14.402446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:58.861 [2024-07-11 05:59:14.580365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.120 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.120 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:59.120 05:59:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.120 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.120 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.121 05:59:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.121 05:59:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.121 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.121 05:59:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.121 [2024-07-11 05:59:14.998100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.121 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.121 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:59.121 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.121 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.379 Malloc0 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.379 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.379 [2024-07-11 05:59:15.102492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:59.380 { 00:12:59.380 "params": { 00:12:59.380 "name": "Nvme$subsystem", 00:12:59.380 "trtype": "$TEST_TRANSPORT", 00:12:59.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:59.380 "adrfam": "ipv4", 00:12:59.380 "trsvcid": "$NVMF_PORT", 00:12:59.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:59.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:59.380 "hdgst": ${hdgst:-false}, 00:12:59.380 "ddgst": ${ddgst:-false} 00:12:59.380 }, 00:12:59.380 "method": "bdev_nvme_attach_controller" 00:12:59.380 } 00:12:59.380 EOF 00:12:59.380 )") 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:59.380 05:59:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:59.380 "params": { 00:12:59.380 "name": "Nvme1", 00:12:59.380 "trtype": "tcp", 00:12:59.380 "traddr": "10.0.0.2", 00:12:59.380 "adrfam": "ipv4", 00:12:59.380 "trsvcid": "4420", 00:12:59.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:59.380 "hdgst": false, 00:12:59.380 "ddgst": false 00:12:59.380 }, 00:12:59.380 "method": "bdev_nvme_attach_controller" 00:12:59.380 }' 00:12:59.380 [2024-07-11 05:59:15.211036] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:12:59.380 [2024-07-11 05:59:15.211213] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71858 ] 00:12:59.639 [2024-07-11 05:59:15.385388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.897 [2024-07-11 05:59:15.614692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.897 [2024-07-11 05:59:15.614809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.897 [2024-07-11 05:59:15.615053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.897 [2024-07-11 05:59:15.805119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:00.155 I/O targets: 00:13:00.155 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:00.155 00:13:00.155 00:13:00.155 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.156 http://cunit.sourceforge.net/ 00:13:00.156 00:13:00.156 00:13:00.156 Suite: bdevio tests on: Nvme1n1 00:13:00.156 Test: blockdev write read block ...passed 00:13:00.156 Test: blockdev write zeroes read block ...passed 00:13:00.156 Test: blockdev write zeroes read no split ...passed 00:13:00.156 Test: blockdev write zeroes read split ...passed 00:13:00.156 Test: blockdev write zeroes read split partial ...passed 00:13:00.156 Test: blockdev reset ...[2024-07-11 05:59:16.058286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:00.156 [2024-07-11 05:59:16.058483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:00.414 [2024-07-11 05:59:16.079874] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:00.414 passed 00:13:00.414 Test: blockdev write read 8 blocks ...passed 00:13:00.414 Test: blockdev write read size > 128k ...passed 00:13:00.414 Test: blockdev write read invalid size ...passed 00:13:00.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.414 Test: blockdev write read max offset ...passed 00:13:00.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.414 Test: blockdev writev readv 8 blocks ...passed 00:13:00.414 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.414 Test: blockdev writev readv block ...passed 00:13:00.414 Test: blockdev writev readv size > 128k ...passed 00:13:00.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.414 Test: blockdev comparev and writev ...[2024-07-11 05:59:16.093457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.093538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.093574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.093596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.094228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.094296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.094327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.094347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.094823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.094870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.094900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.095032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.095644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.095721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.095752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:00.414 [2024-07-11 05:59:16.095772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:00.414 passed 00:13:00.414 Test: blockdev nvme passthru rw ...passed 00:13:00.414 Test: blockdev nvme passthru vendor specific ...[2024-07-11 05:59:16.097110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.414 [2024-07-11 05:59:16.097276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.097655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.414 [2024-07-11 05:59:16.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.097870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.414 [2024-07-11 05:59:16.097914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:00.414 [2024-07-11 05:59:16.098531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:00.414 [2024-07-11 05:59:16.098578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:00.414 passed 00:13:00.414 Test: blockdev nvme admin passthru ...passed 00:13:00.414 Test: blockdev copy ...passed 00:13:00.414 00:13:00.414 Run Summary: Type Total Ran Passed Failed Inactive 00:13:00.414 suites 1 1 n/a 0 0 00:13:00.414 tests 23 23 23 0 0 00:13:00.414 asserts 152 152 152 0 n/a 00:13:00.414 00:13:00.414 Elapsed time = 0.309 seconds 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.349 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.349 rmmod nvme_tcp 00:13:01.608 rmmod nvme_fabrics 00:13:01.608 rmmod nvme_keyring 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 71822 ']' 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 71822 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
71822 ']' 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 71822 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71822 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:01.608 killing process with pid 71822 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71822' 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 71822 00:13:01.608 05:59:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 71822 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:02.988 00:13:02.988 real 0m5.139s 00:13:02.988 user 0m19.765s 00:13:02.988 sys 0m0.906s 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.988 ************************************ 00:13:02.988 END TEST nvmf_bdevio 00:13:02.988 ************************************ 00:13:02.988 05:59:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:02.988 05:59:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:02.988 05:59:18 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:02.988 05:59:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:02.988 05:59:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.988 05:59:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.988 ************************************ 00:13:02.988 START TEST nvmf_auth_target 00:13:02.988 ************************************ 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:02.988 * Looking for test storage... 
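The COMPARE FAILURE and ABORTED - FAILED FUSED completions printed in the bdevio output above are the expected negative-path results of the fused compare-and-write cases rather than real failures; the summary still reports 23/23 tests and 152/152 asserts passing before the harness moves on to nvmf_auth_target. For reference, a minimal sketch of repeating just that suite outside the nightly sweep, following the run_test invocation pattern used in this log (running it standalone with root and the usual SPDK test dependencies is an assumption, not something this run exercises):

  cd /home/vagrant/spdk_repo/spdk
  sudo NET_TYPE=virt ./test/nvmf/target/bdevio.sh --transport=tcp   # NET_TYPE=virt selects the veth topology used in this run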
00:13:02.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.988 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:02.989 Cannot find device "nvmf_tgt_br" 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.989 Cannot find device "nvmf_tgt_br2" 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:02.989 Cannot find device "nvmf_tgt_br" 00:13:02.989 
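The "Cannot find device" and "Cannot open network namespace" messages in this part of the trace are the expected first-pass output of nvmf_veth_init: it tears down any leftover topology before building a fresh one, and on a clean node there is nothing to delete. Reduced to its effect, the setup performed by the ip and iptables calls that follow amounts to the sketch below (commands and 10.0.0.x addressing taken from the trace itself; the second target interface nvmf_tgt_if2/10.0.0.3 and the individual link-up calls are handled the same way and omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP listener traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic between the veth pairs

The pings to 10.0.0.2 and 10.0.0.3 from the root namespace and to 10.0.0.1 from inside nvmf_tgt_ns_spdk then confirm the bridge is forwarding before the target application is started.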
05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:02.989 Cannot find device "nvmf_tgt_br2" 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.989 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:03.248 05:59:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.248 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.248 05:59:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:03.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:13:03.249 00:13:03.249 --- 10.0.0.2 ping statistics --- 00:13:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.249 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:03.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:13:03.249 00:13:03.249 --- 10.0.0.3 ping statistics --- 00:13:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.249 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:03.249 00:13:03.249 --- 10.0.0.1 ping statistics --- 00:13:03.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.249 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72091 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72091 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72091 ']' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:03.249 05:59:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.249 05:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.185 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.185 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:04.185 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.185 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.185 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=72119 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6c21b9ffa45f2c66c6bb749cab5b6f5719fa365966e2aaae 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Cy5 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6c21b9ffa45f2c66c6bb749cab5b6f5719fa365966e2aaae 0 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6c21b9ffa45f2c66c6bb749cab5b6f5719fa365966e2aaae 0 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6c21b9ffa45f2c66c6bb749cab5b6f5719fa365966e2aaae 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Cy5 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Cy5 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Cy5 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=65f05f8e0b351bf761446fbcfe6af0db2459e9aeea34e3f93e972bf491b448ae 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i1N 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 65f05f8e0b351bf761446fbcfe6af0db2459e9aeea34e3f93e972bf491b448ae 3 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 65f05f8e0b351bf761446fbcfe6af0db2459e9aeea34e3f93e972bf491b448ae 3 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=65f05f8e0b351bf761446fbcfe6af0db2459e9aeea34e3f93e972bf491b448ae 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i1N 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i1N 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.i1N 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=00cf94e729346086497a3d01209fa57a 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ilI 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 00cf94e729346086497a3d01209fa57a 1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 00cf94e729346086497a3d01209fa57a 1 
00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=00cf94e729346086497a3d01209fa57a 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ilI 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ilI 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ilI 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=76b752be4fd8c0d17512d0d12b26f90657ae26b581ef3517 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RWM 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 76b752be4fd8c0d17512d0d12b26f90657ae26b581ef3517 2 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 76b752be4fd8c0d17512d0d12b26f90657ae26b581ef3517 2 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=76b752be4fd8c0d17512d0d12b26f90657ae26b581ef3517 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:04.444 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.702 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RWM 00:13:04.702 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RWM 00:13:04.702 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.RWM 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:04.703 
05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ced06a5182ada849ae55443bb3f8c8a63f9851de4792537 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xUu 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ced06a5182ada849ae55443bb3f8c8a63f9851de4792537 2 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ced06a5182ada849ae55443bb3f8c8a63f9851de4792537 2 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ced06a5182ada849ae55443bb3f8c8a63f9851de4792537 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xUu 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xUu 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.xUu 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9b420ed34df30c509265fb0964b9c664 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ssj 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9b420ed34df30c509265fb0964b9c664 1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9b420ed34df30c509265fb0964b9c664 1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9b420ed34df30c509265fb0964b9c664 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ssj 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ssj 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ssj 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bb4d784518de14b952cfd4102b1849dd7bb1081fa0ef53c0ed7e3cd18f3b6c08 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.X7U 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bb4d784518de14b952cfd4102b1849dd7bb1081fa0ef53c0ed7e3cd18f3b6c08 3 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bb4d784518de14b952cfd4102b1849dd7bb1081fa0ef53c0ed7e3cd18f3b6c08 3 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bb4d784518de14b952cfd4102b1849dd7bb1081fa0ef53c0ed7e3cd18f3b6c08 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.X7U 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.X7U 00:13:04.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.X7U 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 72091 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72091 ']' 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.703 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
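Each gen_dhchap_key call above draws random hex from /dev/urandom with xxd and wraps it into a secret of the form DHHC-1:<digest id>:<base64 payload>:, where the digest id is 00 for an unhashed key and 01/02/03 for sha256/sha384/sha512; these strings are what later appear verbatim as --dhchap-secret/--dhchap-ctrl-secret values on the nvme connect lines. A sketch of producing a key0-style secret the same way (the 4-byte CRC32 trailer and its byte order are assumptions about the formatting helper traced above, not something verified from this log):

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as in "gen_dhchap_key null 48"
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:00:"+base64.b64encode(k+crc).decode()+":")' "$key" > /tmp/dhchap.key0.example
  chmod 0600 /tmp/dhchap.key0.example    # same 0600 mode the helper applies to its key files

Before each authenticated connect the keys are registered on both sides; condensed from the trace that follows, using the same sockets, NQNs and addresses as this run:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368
  # target side (nvmf_tgt running inside the netns, default RPC socket)
  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Cy5
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i1N
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side (spdk_tgt on /var/tmp/host.sock, acting as the NVMe-oF initiator)
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Cy5
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i1N
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The nvme connect invocations seen afterwards pass the formatted secrets directly via --dhchap-secret and --dhchap-ctrl-secret instead of going through a keyring.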
00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 72119 /var/tmp/host.sock 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72119 ']' 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.961 05:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Cy5 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Cy5 00:13:05.527 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Cy5 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.i1N ]] 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i1N 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i1N 00:13:05.785 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.i1N 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ilI 00:13:06.044 05:59:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ilI 00:13:06.044 05:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ilI 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.RWM ]] 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWM 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWM 00:13:06.303 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RWM 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xUu 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xUu 00:13:06.563 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xUu 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ssj ]] 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ssj 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ssj 00:13:06.822 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ssj 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.X7U 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.X7U 00:13:07.081 05:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.X7U 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.340 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.599 00:13:07.866 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.866 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.866 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.129 { 00:13:08.129 "cntlid": 1, 00:13:08.129 "qid": 0, 00:13:08.129 "state": "enabled", 00:13:08.129 "thread": "nvmf_tgt_poll_group_000", 00:13:08.129 "listen_address": { 00:13:08.129 "trtype": "TCP", 00:13:08.129 "adrfam": "IPv4", 00:13:08.129 "traddr": "10.0.0.2", 00:13:08.129 "trsvcid": "4420" 00:13:08.129 }, 00:13:08.129 "peer_address": { 00:13:08.129 "trtype": "TCP", 00:13:08.129 "adrfam": "IPv4", 00:13:08.129 "traddr": "10.0.0.1", 00:13:08.129 "trsvcid": "40066" 00:13:08.129 }, 00:13:08.129 "auth": { 00:13:08.129 "state": "completed", 00:13:08.129 "digest": "sha256", 00:13:08.129 "dhgroup": "null" 00:13:08.129 } 00:13:08.129 } 00:13:08.129 ]' 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.129 05:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.387 05:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.575 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.833 00:13:12.833 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.833 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.833 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.092 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.092 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.092 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.093 { 00:13:13.093 "cntlid": 3, 00:13:13.093 "qid": 0, 00:13:13.093 "state": "enabled", 00:13:13.093 "thread": "nvmf_tgt_poll_group_000", 00:13:13.093 "listen_address": { 00:13:13.093 "trtype": "TCP", 00:13:13.093 "adrfam": "IPv4", 00:13:13.093 "traddr": "10.0.0.2", 00:13:13.093 "trsvcid": "4420" 00:13:13.093 }, 00:13:13.093 "peer_address": { 00:13:13.093 "trtype": "TCP", 00:13:13.093 "adrfam": "IPv4", 00:13:13.093 "traddr": "10.0.0.1", 00:13:13.093 "trsvcid": "56564" 00:13:13.093 }, 00:13:13.093 "auth": { 00:13:13.093 "state": "completed", 00:13:13.093 "digest": "sha256", 00:13:13.093 "dhgroup": "null" 00:13:13.093 } 
00:13:13.093 } 00:13:13.093 ]' 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.093 05:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.351 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:13.351 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.351 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.351 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.351 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.608 05:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.174 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.432 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.999 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.999 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.257 05:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.257 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.257 { 00:13:15.257 "cntlid": 5, 00:13:15.257 "qid": 0, 00:13:15.257 "state": "enabled", 00:13:15.257 "thread": "nvmf_tgt_poll_group_000", 00:13:15.257 "listen_address": { 00:13:15.257 "trtype": "TCP", 00:13:15.257 "adrfam": "IPv4", 00:13:15.257 "traddr": "10.0.0.2", 00:13:15.257 "trsvcid": "4420" 00:13:15.257 }, 00:13:15.257 "peer_address": { 00:13:15.257 "trtype": "TCP", 00:13:15.257 "adrfam": "IPv4", 00:13:15.257 "traddr": "10.0.0.1", 00:13:15.257 "trsvcid": "56584" 00:13:15.257 }, 00:13:15.257 "auth": { 00:13:15.257 "state": "completed", 00:13:15.257 "digest": "sha256", 00:13:15.258 "dhgroup": "null" 00:13:15.258 } 00:13:15.258 } 00:13:15.258 ]' 00:13:15.258 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.258 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.258 05:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.258 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:15.258 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.258 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.258 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.258 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.516 05:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 
8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.451 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.709 00:13:16.968 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.968 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.968 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.226 { 00:13:17.226 "cntlid": 7, 00:13:17.226 "qid": 0, 00:13:17.226 "state": "enabled", 00:13:17.226 "thread": "nvmf_tgt_poll_group_000", 00:13:17.226 "listen_address": { 00:13:17.226 "trtype": "TCP", 00:13:17.226 "adrfam": "IPv4", 00:13:17.226 "traddr": "10.0.0.2", 00:13:17.226 "trsvcid": "4420" 00:13:17.226 }, 00:13:17.226 "peer_address": { 00:13:17.226 "trtype": "TCP", 00:13:17.226 "adrfam": "IPv4", 00:13:17.226 "traddr": "10.0.0.1", 00:13:17.226 "trsvcid": "56608" 00:13:17.226 }, 00:13:17.226 "auth": { 00:13:17.226 "state": "completed", 00:13:17.226 "digest": "sha256", 00:13:17.226 "dhgroup": "null" 00:13:17.226 } 00:13:17.226 } 00:13:17.226 ]' 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.226 05:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.226 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.226 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.226 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.226 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.226 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.484 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.050 05:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:18.050 05:59:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.308 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.570 00:13:18.570 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.570 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.570 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.839 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.104 { 00:13:19.104 "cntlid": 9, 00:13:19.104 "qid": 0, 00:13:19.104 "state": "enabled", 00:13:19.104 "thread": "nvmf_tgt_poll_group_000", 00:13:19.104 "listen_address": { 00:13:19.104 "trtype": "TCP", 00:13:19.104 "adrfam": "IPv4", 00:13:19.104 "traddr": "10.0.0.2", 00:13:19.104 "trsvcid": "4420" 00:13:19.104 }, 00:13:19.104 "peer_address": { 00:13:19.104 "trtype": "TCP", 00:13:19.104 "adrfam": "IPv4", 00:13:19.104 "traddr": "10.0.0.1", 00:13:19.104 "trsvcid": "56624" 00:13:19.104 }, 00:13:19.104 "auth": { 00:13:19.104 "state": "completed", 00:13:19.104 
"digest": "sha256", 00:13:19.104 "dhgroup": "ffdhe2048" 00:13:19.104 } 00:13:19.104 } 00:13:19.104 ]' 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.104 05:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.363 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:20.299 05:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.299 05:59:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.299 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.558 00:13:20.816 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.816 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.817 { 00:13:20.817 "cntlid": 11, 00:13:20.817 "qid": 0, 00:13:20.817 "state": "enabled", 00:13:20.817 "thread": "nvmf_tgt_poll_group_000", 00:13:20.817 "listen_address": { 00:13:20.817 "trtype": "TCP", 00:13:20.817 "adrfam": "IPv4", 00:13:20.817 "traddr": "10.0.0.2", 00:13:20.817 "trsvcid": "4420" 00:13:20.817 }, 00:13:20.817 "peer_address": { 00:13:20.817 "trtype": "TCP", 00:13:20.817 "adrfam": "IPv4", 00:13:20.817 "traddr": "10.0.0.1", 00:13:20.817 "trsvcid": "56668" 00:13:20.817 }, 00:13:20.817 "auth": { 00:13:20.817 "state": "completed", 00:13:20.817 "digest": "sha256", 00:13:20.817 "dhgroup": "ffdhe2048" 00:13:20.817 } 00:13:20.817 } 00:13:20.817 ]' 00:13:20.817 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.075 05:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.333 05:59:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:21.900 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.158 05:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.158 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.158 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.158 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.417 00:13:22.417 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.417 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:22.417 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.676 { 00:13:22.676 "cntlid": 13, 00:13:22.676 "qid": 0, 00:13:22.676 "state": "enabled", 00:13:22.676 "thread": "nvmf_tgt_poll_group_000", 00:13:22.676 "listen_address": { 00:13:22.676 "trtype": "TCP", 00:13:22.676 "adrfam": "IPv4", 00:13:22.676 "traddr": "10.0.0.2", 00:13:22.676 "trsvcid": "4420" 00:13:22.676 }, 00:13:22.676 "peer_address": { 00:13:22.676 "trtype": "TCP", 00:13:22.676 "adrfam": "IPv4", 00:13:22.676 "traddr": "10.0.0.1", 00:13:22.676 "trsvcid": "52348" 00:13:22.676 }, 00:13:22.676 "auth": { 00:13:22.676 "state": "completed", 00:13:22.676 "digest": "sha256", 00:13:22.676 "dhgroup": "ffdhe2048" 00:13:22.676 } 00:13:22.676 } 00:13:22.676 ]' 00:13:22.676 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.936 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.194 05:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.762 05:59:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:23.762 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:24.020 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:24.020 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.021 05:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.279 00:13:24.279 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.279 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.279 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.538 { 00:13:24.538 "cntlid": 15, 00:13:24.538 "qid": 0, 00:13:24.538 "state": "enabled", 00:13:24.538 "thread": "nvmf_tgt_poll_group_000", 00:13:24.538 "listen_address": { 00:13:24.538 "trtype": "TCP", 00:13:24.538 "adrfam": "IPv4", 00:13:24.538 "traddr": "10.0.0.2", 00:13:24.538 "trsvcid": "4420" 00:13:24.538 }, 00:13:24.538 "peer_address": { 00:13:24.538 "trtype": "TCP", 
00:13:24.538 "adrfam": "IPv4", 00:13:24.538 "traddr": "10.0.0.1", 00:13:24.538 "trsvcid": "52364" 00:13:24.538 }, 00:13:24.538 "auth": { 00:13:24.538 "state": "completed", 00:13:24.538 "digest": "sha256", 00:13:24.538 "dhgroup": "ffdhe2048" 00:13:24.538 } 00:13:24.538 } 00:13:24.538 ]' 00:13:24.538 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.797 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.055 05:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:25.622 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:25.880 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.881 05:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.447 00:13:26.447 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.447 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.447 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.705 { 00:13:26.705 "cntlid": 17, 00:13:26.705 "qid": 0, 00:13:26.705 "state": "enabled", 00:13:26.705 "thread": "nvmf_tgt_poll_group_000", 00:13:26.705 "listen_address": { 00:13:26.705 "trtype": "TCP", 00:13:26.705 "adrfam": "IPv4", 00:13:26.705 "traddr": "10.0.0.2", 00:13:26.705 "trsvcid": "4420" 00:13:26.705 }, 00:13:26.705 "peer_address": { 00:13:26.705 "trtype": "TCP", 00:13:26.705 "adrfam": "IPv4", 00:13:26.705 "traddr": "10.0.0.1", 00:13:26.705 "trsvcid": "52386" 00:13:26.705 }, 00:13:26.705 "auth": { 00:13:26.705 "state": "completed", 00:13:26.705 "digest": "sha256", 00:13:26.705 "dhgroup": "ffdhe3072" 00:13:26.705 } 00:13:26.705 } 00:13:26.705 ]' 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.705 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.705 05:59:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.963 05:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:27.529 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:27.807 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.808 05:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.376 00:13:28.376 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.376 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.376 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.634 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.634 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.634 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.635 { 00:13:28.635 "cntlid": 19, 00:13:28.635 "qid": 0, 00:13:28.635 "state": "enabled", 00:13:28.635 "thread": "nvmf_tgt_poll_group_000", 00:13:28.635 "listen_address": { 00:13:28.635 "trtype": "TCP", 00:13:28.635 "adrfam": "IPv4", 00:13:28.635 "traddr": "10.0.0.2", 00:13:28.635 "trsvcid": "4420" 00:13:28.635 }, 00:13:28.635 "peer_address": { 00:13:28.635 "trtype": "TCP", 00:13:28.635 "adrfam": "IPv4", 00:13:28.635 "traddr": "10.0.0.1", 00:13:28.635 "trsvcid": "52396" 00:13:28.635 }, 00:13:28.635 "auth": { 00:13:28.635 "state": "completed", 00:13:28.635 "digest": "sha256", 00:13:28.635 "dhgroup": "ffdhe3072" 00:13:28.635 } 00:13:28.635 } 00:13:28.635 ]' 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.635 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.893 05:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.459 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.718 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.976 00:13:29.976 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.976 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.976 05:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.235 { 00:13:30.235 "cntlid": 21, 
00:13:30.235 "qid": 0, 00:13:30.235 "state": "enabled", 00:13:30.235 "thread": "nvmf_tgt_poll_group_000", 00:13:30.235 "listen_address": { 00:13:30.235 "trtype": "TCP", 00:13:30.235 "adrfam": "IPv4", 00:13:30.235 "traddr": "10.0.0.2", 00:13:30.235 "trsvcid": "4420" 00:13:30.235 }, 00:13:30.235 "peer_address": { 00:13:30.235 "trtype": "TCP", 00:13:30.235 "adrfam": "IPv4", 00:13:30.235 "traddr": "10.0.0.1", 00:13:30.235 "trsvcid": "52432" 00:13:30.235 }, 00:13:30.235 "auth": { 00:13:30.235 "state": "completed", 00:13:30.235 "digest": "sha256", 00:13:30.235 "dhgroup": "ffdhe3072" 00:13:30.235 } 00:13:30.235 } 00:13:30.235 ]' 00:13:30.235 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.493 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.751 05:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.315 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.573 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:31.831 00:13:31.831 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.831 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.831 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.089 { 00:13:32.089 "cntlid": 23, 00:13:32.089 "qid": 0, 00:13:32.089 "state": "enabled", 00:13:32.089 "thread": "nvmf_tgt_poll_group_000", 00:13:32.089 "listen_address": { 00:13:32.089 "trtype": "TCP", 00:13:32.089 "adrfam": "IPv4", 00:13:32.089 "traddr": "10.0.0.2", 00:13:32.089 "trsvcid": "4420" 00:13:32.089 }, 00:13:32.089 "peer_address": { 00:13:32.089 "trtype": "TCP", 00:13:32.089 "adrfam": "IPv4", 00:13:32.089 "traddr": "10.0.0.1", 00:13:32.089 "trsvcid": "42978" 00:13:32.089 }, 00:13:32.089 "auth": { 00:13:32.089 "state": "completed", 00:13:32.089 "digest": "sha256", 00:13:32.089 "dhgroup": "ffdhe3072" 00:13:32.089 } 00:13:32.089 } 00:13:32.089 ]' 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.089 05:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.347 05:59:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.347 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.347 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.605 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.171 05:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.429 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.687 00:13:33.687 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.687 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.687 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.946 { 00:13:33.946 "cntlid": 25, 00:13:33.946 "qid": 0, 00:13:33.946 "state": "enabled", 00:13:33.946 "thread": "nvmf_tgt_poll_group_000", 00:13:33.946 "listen_address": { 00:13:33.946 "trtype": "TCP", 00:13:33.946 "adrfam": "IPv4", 00:13:33.946 "traddr": "10.0.0.2", 00:13:33.946 "trsvcid": "4420" 00:13:33.946 }, 00:13:33.946 "peer_address": { 00:13:33.946 "trtype": "TCP", 00:13:33.946 "adrfam": "IPv4", 00:13:33.946 "traddr": "10.0.0.1", 00:13:33.946 "trsvcid": "43000" 00:13:33.946 }, 00:13:33.946 "auth": { 00:13:33.946 "state": "completed", 00:13:33.946 "digest": "sha256", 00:13:33.946 "dhgroup": "ffdhe4096" 00:13:33.946 } 00:13:33.946 } 00:13:33.946 ]' 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:33.946 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.204 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.204 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.204 05:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.462 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.028 
05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.028 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.287 05:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.547 00:13:35.547 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.547 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.547 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.806 { 00:13:35.806 "cntlid": 27, 00:13:35.806 "qid": 0, 00:13:35.806 "state": "enabled", 00:13:35.806 "thread": "nvmf_tgt_poll_group_000", 00:13:35.806 "listen_address": { 00:13:35.806 "trtype": "TCP", 00:13:35.806 "adrfam": "IPv4", 00:13:35.806 "traddr": "10.0.0.2", 00:13:35.806 "trsvcid": "4420" 00:13:35.806 }, 00:13:35.806 "peer_address": { 00:13:35.806 "trtype": "TCP", 00:13:35.806 "adrfam": "IPv4", 00:13:35.806 "traddr": "10.0.0.1", 00:13:35.806 "trsvcid": "43022" 00:13:35.806 }, 00:13:35.806 "auth": { 00:13:35.806 "state": "completed", 00:13:35.806 "digest": "sha256", 00:13:35.806 "dhgroup": "ffdhe4096" 00:13:35.806 } 00:13:35.806 } 00:13:35.806 ]' 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:35.806 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.064 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.064 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.065 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.065 05:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.998 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.258 05:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.517 00:13:37.517 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.518 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.518 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.777 { 00:13:37.777 "cntlid": 29, 00:13:37.777 "qid": 0, 00:13:37.777 "state": "enabled", 00:13:37.777 "thread": "nvmf_tgt_poll_group_000", 00:13:37.777 "listen_address": { 00:13:37.777 "trtype": "TCP", 00:13:37.777 "adrfam": "IPv4", 00:13:37.777 "traddr": "10.0.0.2", 00:13:37.777 "trsvcid": "4420" 00:13:37.777 }, 00:13:37.777 "peer_address": { 00:13:37.777 "trtype": "TCP", 00:13:37.777 "adrfam": "IPv4", 00:13:37.777 "traddr": "10.0.0.1", 00:13:37.777 "trsvcid": "43048" 00:13:37.777 }, 00:13:37.777 "auth": { 00:13:37.777 "state": "completed", 00:13:37.777 "digest": "sha256", 00:13:37.777 "dhgroup": "ffdhe4096" 00:13:37.777 } 00:13:37.777 } 00:13:37.777 ]' 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:37.777 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.036 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.036 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.036 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.294 05:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:38.858 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.117 05:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:39.394 00:13:39.394 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.394 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.394 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.670 { 00:13:39.670 "cntlid": 31, 00:13:39.670 "qid": 0, 00:13:39.670 "state": "enabled", 00:13:39.670 "thread": "nvmf_tgt_poll_group_000", 00:13:39.670 "listen_address": { 00:13:39.670 "trtype": "TCP", 00:13:39.670 "adrfam": "IPv4", 00:13:39.670 "traddr": "10.0.0.2", 00:13:39.670 "trsvcid": "4420" 00:13:39.670 }, 00:13:39.670 "peer_address": { 00:13:39.670 "trtype": "TCP", 00:13:39.670 "adrfam": "IPv4", 00:13:39.670 "traddr": "10.0.0.1", 00:13:39.670 "trsvcid": "43066" 00:13:39.670 }, 00:13:39.670 "auth": { 00:13:39.670 "state": "completed", 00:13:39.670 "digest": "sha256", 00:13:39.670 "dhgroup": "ffdhe4096" 00:13:39.670 } 00:13:39.670 } 00:13:39.670 ]' 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.670 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.940 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:39.941 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.941 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.941 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.941 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.198 05:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:40.763 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.763 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.763 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:40.763 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.763 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.764 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.764 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.764 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.764 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.764 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.021 05:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.022 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.022 05:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.280 00:13:41.280 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.280 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.280 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
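For orientation, each connect_authenticate pass recorded above reduces to the same short RPC sequence. The sketch below is condensed from the calls visible in this log for the pass in progress (digest sha256, dhgroup ffdhe6144, key index 0); the NQNs, socket path, and key names are the ones the test itself uses, and key0/ckey0 refer to DH-HMAC-CHAP keys loaded earlier in the run (not shown here). It is illustrative only, not additional tooling.

  # Target-side RPCs (rpc_cmd in the log) go to the nvmf target's default socket;
  # host-side bdev_nvme RPCs (hostrpc) go to the second SPDK instance at /var/tmp/host.sock.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0                 # yields the qpair JSON seen above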
00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.538 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.538 { 00:13:41.538 "cntlid": 33, 00:13:41.538 "qid": 0, 00:13:41.538 "state": "enabled", 00:13:41.538 "thread": "nvmf_tgt_poll_group_000", 00:13:41.538 "listen_address": { 00:13:41.538 "trtype": "TCP", 00:13:41.538 "adrfam": "IPv4", 00:13:41.538 "traddr": "10.0.0.2", 00:13:41.538 "trsvcid": "4420" 00:13:41.538 }, 00:13:41.538 "peer_address": { 00:13:41.538 "trtype": "TCP", 00:13:41.538 "adrfam": "IPv4", 00:13:41.538 "traddr": "10.0.0.1", 00:13:41.538 "trsvcid": "43088" 00:13:41.539 }, 00:13:41.539 "auth": { 00:13:41.539 "state": "completed", 00:13:41.539 "digest": "sha256", 00:13:41.539 "dhgroup": "ffdhe6144" 00:13:41.539 } 00:13:41.539 } 00:13:41.539 ]' 00:13:41.539 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.539 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.539 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.797 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.797 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.797 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.797 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.797 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.056 05:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.623 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.882 05:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.452 00:13:43.452 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.452 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.452 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.712 { 00:13:43.712 "cntlid": 35, 00:13:43.712 "qid": 0, 00:13:43.712 "state": "enabled", 00:13:43.712 "thread": "nvmf_tgt_poll_group_000", 00:13:43.712 "listen_address": { 00:13:43.712 "trtype": "TCP", 00:13:43.712 "adrfam": "IPv4", 00:13:43.712 "traddr": "10.0.0.2", 00:13:43.712 "trsvcid": "4420" 00:13:43.712 }, 00:13:43.712 "peer_address": { 00:13:43.712 "trtype": "TCP", 00:13:43.712 "adrfam": "IPv4", 00:13:43.712 "traddr": "10.0.0.1", 00:13:43.712 "trsvcid": "40304" 00:13:43.712 }, 00:13:43.712 "auth": { 00:13:43.712 "state": "completed", 00:13:43.712 "digest": "sha256", 00:13:43.712 "dhgroup": "ffdhe6144" 00:13:43.712 } 00:13:43.712 } 00:13:43.712 ]' 00:13:43.712 05:59:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.712 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.974 05:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.541 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.801 
06:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.801 06:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.369 00:13:45.369 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.369 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.369 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.626 { 00:13:45.626 "cntlid": 37, 00:13:45.626 "qid": 0, 00:13:45.626 "state": "enabled", 00:13:45.626 "thread": "nvmf_tgt_poll_group_000", 00:13:45.626 "listen_address": { 00:13:45.626 "trtype": "TCP", 00:13:45.626 "adrfam": "IPv4", 00:13:45.626 "traddr": "10.0.0.2", 00:13:45.626 "trsvcid": "4420" 00:13:45.626 }, 00:13:45.626 "peer_address": { 00:13:45.626 "trtype": "TCP", 00:13:45.626 "adrfam": "IPv4", 00:13:45.626 "traddr": "10.0.0.1", 00:13:45.626 "trsvcid": "40342" 00:13:45.626 }, 00:13:45.626 "auth": { 00:13:45.626 "state": "completed", 00:13:45.626 "digest": "sha256", 00:13:45.626 "dhgroup": "ffdhe6144" 00:13:45.626 } 00:13:45.626 } 00:13:45.626 ]' 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.626 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.904 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.904 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.904 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.904 06:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 
8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.845 06:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:47.410 00:13:47.410 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.410 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.410 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.670 { 00:13:47.670 "cntlid": 39, 00:13:47.670 "qid": 0, 00:13:47.670 "state": "enabled", 00:13:47.670 "thread": "nvmf_tgt_poll_group_000", 00:13:47.670 "listen_address": { 00:13:47.670 "trtype": "TCP", 00:13:47.670 "adrfam": "IPv4", 00:13:47.670 "traddr": "10.0.0.2", 00:13:47.670 "trsvcid": "4420" 00:13:47.670 }, 00:13:47.670 "peer_address": { 00:13:47.670 "trtype": "TCP", 00:13:47.670 "adrfam": "IPv4", 00:13:47.670 "traddr": "10.0.0.1", 00:13:47.670 "trsvcid": "40376" 00:13:47.670 }, 00:13:47.670 "auth": { 00:13:47.670 "state": "completed", 00:13:47.670 "digest": "sha256", 00:13:47.670 "dhgroup": "ffdhe6144" 00:13:47.670 } 00:13:47.670 } 00:13:47.670 ]' 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.670 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.263 06:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:13:48.830 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.087 06:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.653 00:13:49.653 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.653 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.653 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.912 { 00:13:49.912 "cntlid": 41, 00:13:49.912 "qid": 0, 00:13:49.912 "state": "enabled", 00:13:49.912 "thread": "nvmf_tgt_poll_group_000", 00:13:49.912 "listen_address": { 00:13:49.912 "trtype": "TCP", 00:13:49.912 "adrfam": "IPv4", 00:13:49.912 "traddr": "10.0.0.2", 00:13:49.912 "trsvcid": "4420" 00:13:49.912 }, 00:13:49.912 "peer_address": { 00:13:49.912 "trtype": "TCP", 00:13:49.912 "adrfam": "IPv4", 00:13:49.912 "traddr": "10.0.0.1", 00:13:49.912 "trsvcid": "40420" 00:13:49.912 }, 00:13:49.912 "auth": { 00:13:49.912 
"state": "completed", 00:13:49.912 "digest": "sha256", 00:13:49.912 "dhgroup": "ffdhe8192" 00:13:49.912 } 00:13:49.912 } 00:13:49.912 ]' 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.912 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.170 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.170 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.170 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.170 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.170 06:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.429 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.996 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.254 06:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.820 00:13:51.820 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.820 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.820 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.078 { 00:13:52.078 "cntlid": 43, 00:13:52.078 "qid": 0, 00:13:52.078 "state": "enabled", 00:13:52.078 "thread": "nvmf_tgt_poll_group_000", 00:13:52.078 "listen_address": { 00:13:52.078 "trtype": "TCP", 00:13:52.078 "adrfam": "IPv4", 00:13:52.078 "traddr": "10.0.0.2", 00:13:52.078 "trsvcid": "4420" 00:13:52.078 }, 00:13:52.078 "peer_address": { 00:13:52.078 "trtype": "TCP", 00:13:52.078 "adrfam": "IPv4", 00:13:52.078 "traddr": "10.0.0.1", 00:13:52.078 "trsvcid": "40456" 00:13:52.078 }, 00:13:52.078 "auth": { 00:13:52.078 "state": "completed", 00:13:52.078 "digest": "sha256", 00:13:52.078 "dhgroup": "ffdhe8192" 00:13:52.078 } 00:13:52.078 } 00:13:52.078 ]' 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.078 06:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.336 06:00:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.282 06:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.543 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.111 00:13:54.111 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.111 06:00:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.111 06:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.370 { 00:13:54.370 "cntlid": 45, 00:13:54.370 "qid": 0, 00:13:54.370 "state": "enabled", 00:13:54.370 "thread": "nvmf_tgt_poll_group_000", 00:13:54.370 "listen_address": { 00:13:54.370 "trtype": "TCP", 00:13:54.370 "adrfam": "IPv4", 00:13:54.370 "traddr": "10.0.0.2", 00:13:54.370 "trsvcid": "4420" 00:13:54.370 }, 00:13:54.370 "peer_address": { 00:13:54.370 "trtype": "TCP", 00:13:54.370 "adrfam": "IPv4", 00:13:54.370 "traddr": "10.0.0.1", 00:13:54.370 "trsvcid": "58650" 00:13:54.370 }, 00:13:54.370 "auth": { 00:13:54.370 "state": "completed", 00:13:54.370 "digest": "sha256", 00:13:54.370 "dhgroup": "ffdhe8192" 00:13:54.370 } 00:13:54.370 } 00:13:54.370 ]' 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.370 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.628 06:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:13:55.194 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.194 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:55.194 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.195 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.195 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.195 06:00:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.195 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:55.195 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:55.453 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:56.020 00:13:56.020 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.020 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.020 06:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.278 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.278 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.278 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.278 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.536 { 00:13:56.536 "cntlid": 47, 00:13:56.536 "qid": 0, 00:13:56.536 "state": "enabled", 00:13:56.536 "thread": "nvmf_tgt_poll_group_000", 00:13:56.536 "listen_address": { 00:13:56.536 "trtype": "TCP", 00:13:56.536 "adrfam": "IPv4", 00:13:56.536 "traddr": "10.0.0.2", 00:13:56.536 "trsvcid": "4420" 00:13:56.536 }, 00:13:56.536 "peer_address": { 00:13:56.536 "trtype": "TCP", 
00:13:56.536 "adrfam": "IPv4", 00:13:56.536 "traddr": "10.0.0.1", 00:13:56.536 "trsvcid": "58674" 00:13:56.536 }, 00:13:56.536 "auth": { 00:13:56.536 "state": "completed", 00:13:56.536 "digest": "sha256", 00:13:56.536 "dhgroup": "ffdhe8192" 00:13:56.536 } 00:13:56.536 } 00:13:56.536 ]' 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.536 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.793 06:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.360 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.618 
06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.618 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.875 00:13:58.132 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.132 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.132 06:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.132 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.132 { 00:13:58.132 "cntlid": 49, 00:13:58.132 "qid": 0, 00:13:58.132 "state": "enabled", 00:13:58.132 "thread": "nvmf_tgt_poll_group_000", 00:13:58.132 "listen_address": { 00:13:58.132 "trtype": "TCP", 00:13:58.132 "adrfam": "IPv4", 00:13:58.132 "traddr": "10.0.0.2", 00:13:58.132 "trsvcid": "4420" 00:13:58.132 }, 00:13:58.132 "peer_address": { 00:13:58.132 "trtype": "TCP", 00:13:58.132 "adrfam": "IPv4", 00:13:58.132 "traddr": "10.0.0.1", 00:13:58.132 "trsvcid": "58706" 00:13:58.132 }, 00:13:58.132 "auth": { 00:13:58.132 "state": "completed", 00:13:58.132 "digest": "sha384", 00:13:58.133 "dhgroup": "null" 00:13:58.133 } 00:13:58.133 } 00:13:58.133 ]' 00:13:58.133 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:13:58.390 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.648 06:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.581 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.582 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.839 00:13:59.840 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.840 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.840 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.097 { 00:14:00.097 "cntlid": 51, 00:14:00.097 "qid": 0, 00:14:00.097 "state": "enabled", 00:14:00.097 "thread": "nvmf_tgt_poll_group_000", 00:14:00.097 "listen_address": { 00:14:00.097 "trtype": "TCP", 00:14:00.097 "adrfam": "IPv4", 00:14:00.097 "traddr": "10.0.0.2", 00:14:00.097 "trsvcid": "4420" 00:14:00.097 }, 00:14:00.097 "peer_address": { 00:14:00.097 "trtype": "TCP", 00:14:00.097 "adrfam": "IPv4", 00:14:00.097 "traddr": "10.0.0.1", 00:14:00.097 "trsvcid": "58736" 00:14:00.097 }, 00:14:00.097 "auth": { 00:14:00.097 "state": "completed", 00:14:00.097 "digest": "sha384", 00:14:00.097 "dhgroup": "null" 00:14:00.097 } 00:14:00.097 } 00:14:00.097 ]' 00:14:00.097 06:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.097 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.097 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.357 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:00.357 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.357 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.357 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.357 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.615 06:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.181 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.439 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.004 00:14:02.004 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.004 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.004 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.262 { 00:14:02.262 "cntlid": 53, 00:14:02.262 "qid": 0, 00:14:02.262 "state": "enabled", 
00:14:02.262 "thread": "nvmf_tgt_poll_group_000", 00:14:02.262 "listen_address": { 00:14:02.262 "trtype": "TCP", 00:14:02.262 "adrfam": "IPv4", 00:14:02.262 "traddr": "10.0.0.2", 00:14:02.262 "trsvcid": "4420" 00:14:02.262 }, 00:14:02.262 "peer_address": { 00:14:02.262 "trtype": "TCP", 00:14:02.262 "adrfam": "IPv4", 00:14:02.262 "traddr": "10.0.0.1", 00:14:02.262 "trsvcid": "58282" 00:14:02.262 }, 00:14:02.262 "auth": { 00:14:02.262 "state": "completed", 00:14:02.262 "digest": "sha384", 00:14:02.262 "dhgroup": "null" 00:14:02.262 } 00:14:02.262 } 00:14:02.262 ]' 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.262 06:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.262 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:02.262 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.262 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.262 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.262 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.521 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:03.087 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.087 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:03.088 06:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.346 
06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.346 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.604 00:14:03.604 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.604 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.604 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.862 { 00:14:03.862 "cntlid": 55, 00:14:03.862 "qid": 0, 00:14:03.862 "state": "enabled", 00:14:03.862 "thread": "nvmf_tgt_poll_group_000", 00:14:03.862 "listen_address": { 00:14:03.862 "trtype": "TCP", 00:14:03.862 "adrfam": "IPv4", 00:14:03.862 "traddr": "10.0.0.2", 00:14:03.862 "trsvcid": "4420" 00:14:03.862 }, 00:14:03.862 "peer_address": { 00:14:03.862 "trtype": "TCP", 00:14:03.862 "adrfam": "IPv4", 00:14:03.862 "traddr": "10.0.0.1", 00:14:03.862 "trsvcid": "58316" 00:14:03.862 }, 00:14:03.862 "auth": { 00:14:03.862 "state": "completed", 00:14:03.862 "digest": "sha384", 00:14:03.862 "dhgroup": "null" 00:14:03.862 } 00:14:03.862 } 00:14:03.862 ]' 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.862 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.119 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:04.119 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.119 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.119 06:00:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.119 06:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.377 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:04.944 06:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.202 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.461 00:14:05.461 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.461 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.461 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.718 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.718 { 00:14:05.718 "cntlid": 57, 00:14:05.718 "qid": 0, 00:14:05.718 "state": "enabled", 00:14:05.718 "thread": "nvmf_tgt_poll_group_000", 00:14:05.718 "listen_address": { 00:14:05.718 "trtype": "TCP", 00:14:05.718 "adrfam": "IPv4", 00:14:05.718 "traddr": "10.0.0.2", 00:14:05.718 "trsvcid": "4420" 00:14:05.718 }, 00:14:05.718 "peer_address": { 00:14:05.718 "trtype": "TCP", 00:14:05.718 "adrfam": "IPv4", 00:14:05.718 "traddr": "10.0.0.1", 00:14:05.719 "trsvcid": "58346" 00:14:05.719 }, 00:14:05.719 "auth": { 00:14:05.719 "state": "completed", 00:14:05.719 "digest": "sha384", 00:14:05.719 "dhgroup": "ffdhe2048" 00:14:05.719 } 00:14:05.719 } 00:14:05.719 ]' 00:14:05.719 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.977 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.234 06:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.800 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.059 06:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.318 00:14:07.318 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.318 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.318 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.576 
06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.576 { 00:14:07.576 "cntlid": 59, 00:14:07.576 "qid": 0, 00:14:07.576 "state": "enabled", 00:14:07.576 "thread": "nvmf_tgt_poll_group_000", 00:14:07.576 "listen_address": { 00:14:07.576 "trtype": "TCP", 00:14:07.576 "adrfam": "IPv4", 00:14:07.576 "traddr": "10.0.0.2", 00:14:07.576 "trsvcid": "4420" 00:14:07.576 }, 00:14:07.576 "peer_address": { 00:14:07.576 "trtype": "TCP", 00:14:07.576 "adrfam": "IPv4", 00:14:07.576 "traddr": "10.0.0.1", 00:14:07.576 "trsvcid": "58374" 00:14:07.576 }, 00:14:07.576 "auth": { 00:14:07.576 "state": "completed", 00:14:07.576 "digest": "sha384", 00:14:07.576 "dhgroup": "ffdhe2048" 00:14:07.576 } 00:14:07.576 } 00:14:07.576 ]' 00:14:07.576 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.577 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.577 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.839 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.839 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.839 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.839 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.839 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.097 06:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.679 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.938 06:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.196 00:14:09.196 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.196 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.196 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.454 { 00:14:09.454 "cntlid": 61, 00:14:09.454 "qid": 0, 00:14:09.454 "state": "enabled", 00:14:09.454 "thread": "nvmf_tgt_poll_group_000", 00:14:09.454 "listen_address": { 00:14:09.454 "trtype": "TCP", 00:14:09.454 "adrfam": "IPv4", 00:14:09.454 "traddr": "10.0.0.2", 00:14:09.454 "trsvcid": "4420" 00:14:09.454 }, 00:14:09.454 "peer_address": { 00:14:09.454 "trtype": "TCP", 00:14:09.454 "adrfam": "IPv4", 00:14:09.454 "traddr": "10.0.0.1", 00:14:09.454 "trsvcid": "58398" 00:14:09.454 }, 00:14:09.454 "auth": { 00:14:09.454 "state": "completed", 00:14:09.454 "digest": "sha384", 00:14:09.454 "dhgroup": "ffdhe2048" 00:14:09.454 } 00:14:09.454 } 00:14:09.454 ]' 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.454 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.713 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.713 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.713 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.713 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.713 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.972 06:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:10.566 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.566 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:10.566 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.566 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.566 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.567 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.567 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:10.567 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.824 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.824 06:00:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.082 00:14:11.082 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.082 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.082 06:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.340 { 00:14:11.340 "cntlid": 63, 00:14:11.340 "qid": 0, 00:14:11.340 "state": "enabled", 00:14:11.340 "thread": "nvmf_tgt_poll_group_000", 00:14:11.340 "listen_address": { 00:14:11.340 "trtype": "TCP", 00:14:11.340 "adrfam": "IPv4", 00:14:11.340 "traddr": "10.0.0.2", 00:14:11.340 "trsvcid": "4420" 00:14:11.340 }, 00:14:11.340 "peer_address": { 00:14:11.340 "trtype": "TCP", 00:14:11.340 "adrfam": "IPv4", 00:14:11.340 "traddr": "10.0.0.1", 00:14:11.340 "trsvcid": "58420" 00:14:11.340 }, 00:14:11.340 "auth": { 00:14:11.340 "state": "completed", 00:14:11.340 "digest": "sha384", 00:14:11.340 "dhgroup": "ffdhe2048" 00:14:11.340 } 00:14:11.340 } 00:14:11.340 ]' 00:14:11.340 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.741 06:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.677 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.935 06:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.194 00:14:13.194 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.194 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.194 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.453 06:00:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.453 { 00:14:13.453 "cntlid": 65, 00:14:13.453 "qid": 0, 00:14:13.453 "state": "enabled", 00:14:13.453 "thread": "nvmf_tgt_poll_group_000", 00:14:13.453 "listen_address": { 00:14:13.453 "trtype": "TCP", 00:14:13.453 "adrfam": "IPv4", 00:14:13.453 "traddr": "10.0.0.2", 00:14:13.453 "trsvcid": "4420" 00:14:13.453 }, 00:14:13.453 "peer_address": { 00:14:13.453 "trtype": "TCP", 00:14:13.453 "adrfam": "IPv4", 00:14:13.453 "traddr": "10.0.0.1", 00:14:13.453 "trsvcid": "56000" 00:14:13.453 }, 00:14:13.453 "auth": { 00:14:13.453 "state": "completed", 00:14:13.453 "digest": "sha384", 00:14:13.453 "dhgroup": "ffdhe3072" 00:14:13.453 } 00:14:13.453 } 00:14:13.453 ]' 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.453 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.711 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.711 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.712 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.970 06:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:14.538 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.796 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.054 00:14:15.054 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.054 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.054 06:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.313 { 00:14:15.313 "cntlid": 67, 00:14:15.313 "qid": 0, 00:14:15.313 "state": "enabled", 00:14:15.313 "thread": "nvmf_tgt_poll_group_000", 00:14:15.313 "listen_address": { 00:14:15.313 "trtype": "TCP", 00:14:15.313 "adrfam": "IPv4", 00:14:15.313 "traddr": "10.0.0.2", 00:14:15.313 "trsvcid": "4420" 00:14:15.313 }, 00:14:15.313 "peer_address": { 00:14:15.313 "trtype": "TCP", 00:14:15.313 "adrfam": "IPv4", 00:14:15.313 "traddr": "10.0.0.1", 00:14:15.313 "trsvcid": "56022" 00:14:15.313 }, 00:14:15.313 "auth": { 00:14:15.313 "state": "completed", 00:14:15.313 "digest": "sha384", 00:14:15.313 "dhgroup": "ffdhe3072" 00:14:15.313 } 00:14:15.313 } 00:14:15.313 ]' 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.313 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.572 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.572 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.572 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.572 06:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:16.139 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.397 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.656 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.656 06:00:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.656 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.915 00:14:16.915 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.915 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.915 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.174 { 00:14:17.174 "cntlid": 69, 00:14:17.174 "qid": 0, 00:14:17.174 "state": "enabled", 00:14:17.174 "thread": "nvmf_tgt_poll_group_000", 00:14:17.174 "listen_address": { 00:14:17.174 "trtype": "TCP", 00:14:17.174 "adrfam": "IPv4", 00:14:17.174 "traddr": "10.0.0.2", 00:14:17.174 "trsvcid": "4420" 00:14:17.174 }, 00:14:17.174 "peer_address": { 00:14:17.174 "trtype": "TCP", 00:14:17.174 "adrfam": "IPv4", 00:14:17.174 "traddr": "10.0.0.1", 00:14:17.174 "trsvcid": "56048" 00:14:17.174 }, 00:14:17.174 "auth": { 00:14:17.174 "state": "completed", 00:14:17.174 "digest": "sha384", 00:14:17.174 "dhgroup": "ffdhe3072" 00:14:17.174 } 00:14:17.174 } 00:14:17.174 ]' 00:14:17.174 06:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.174 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.174 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.174 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.174 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.433 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.433 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.433 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.691 06:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret 
DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:18.256 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.515 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.774 00:14:18.774 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.774 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.774 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.341 06:00:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.341 { 00:14:19.341 "cntlid": 71, 00:14:19.341 "qid": 0, 00:14:19.341 "state": "enabled", 00:14:19.341 "thread": "nvmf_tgt_poll_group_000", 00:14:19.341 "listen_address": { 00:14:19.341 "trtype": "TCP", 00:14:19.341 "adrfam": "IPv4", 00:14:19.341 "traddr": "10.0.0.2", 00:14:19.341 "trsvcid": "4420" 00:14:19.341 }, 00:14:19.341 "peer_address": { 00:14:19.341 "trtype": "TCP", 00:14:19.341 "adrfam": "IPv4", 00:14:19.341 "traddr": "10.0.0.1", 00:14:19.341 "trsvcid": "56074" 00:14:19.341 }, 00:14:19.341 "auth": { 00:14:19.341 "state": "completed", 00:14:19.341 "digest": "sha384", 00:14:19.341 "dhgroup": "ffdhe3072" 00:14:19.341 } 00:14:19.341 } 00:14:19.341 ]' 00:14:19.341 06:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.341 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.599 06:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.165 06:00:36 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.424 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.991 00:14:20.991 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.991 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.991 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.249 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.249 06:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.249 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.249 06:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.249 { 00:14:21.249 "cntlid": 73, 00:14:21.249 "qid": 0, 00:14:21.249 "state": "enabled", 00:14:21.249 "thread": "nvmf_tgt_poll_group_000", 00:14:21.249 "listen_address": { 00:14:21.249 "trtype": "TCP", 00:14:21.249 "adrfam": "IPv4", 00:14:21.249 "traddr": "10.0.0.2", 00:14:21.249 "trsvcid": "4420" 00:14:21.249 }, 00:14:21.249 "peer_address": { 00:14:21.249 "trtype": "TCP", 00:14:21.249 "adrfam": "IPv4", 00:14:21.249 "traddr": "10.0.0.1", 00:14:21.249 "trsvcid": "56104" 00:14:21.249 }, 00:14:21.249 "auth": { 00:14:21.249 "state": "completed", 00:14:21.249 "digest": "sha384", 
00:14:21.249 "dhgroup": "ffdhe4096" 00:14:21.249 } 00:14:21.249 } 00:14:21.249 ]' 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.249 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.508 06:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.443 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.010 00:14:23.010 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.010 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.010 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.269 { 00:14:23.269 "cntlid": 75, 00:14:23.269 "qid": 0, 00:14:23.269 "state": "enabled", 00:14:23.269 "thread": "nvmf_tgt_poll_group_000", 00:14:23.269 "listen_address": { 00:14:23.269 "trtype": "TCP", 00:14:23.269 "adrfam": "IPv4", 00:14:23.269 "traddr": "10.0.0.2", 00:14:23.269 "trsvcid": "4420" 00:14:23.269 }, 00:14:23.269 "peer_address": { 00:14:23.269 "trtype": "TCP", 00:14:23.269 "adrfam": "IPv4", 00:14:23.269 "traddr": "10.0.0.1", 00:14:23.269 "trsvcid": "60536" 00:14:23.269 }, 00:14:23.269 "auth": { 00:14:23.269 "state": "completed", 00:14:23.269 "digest": "sha384", 00:14:23.269 "dhgroup": "ffdhe4096" 00:14:23.269 } 00:14:23.269 } 00:14:23.269 ]' 00:14:23.269 06:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.269 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.527 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.093 06:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.351 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.918 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.918 { 00:14:24.918 "cntlid": 77, 00:14:24.918 "qid": 0, 00:14:24.918 "state": "enabled", 00:14:24.918 "thread": "nvmf_tgt_poll_group_000", 00:14:24.918 "listen_address": { 00:14:24.918 "trtype": "TCP", 00:14:24.918 "adrfam": "IPv4", 00:14:24.918 "traddr": "10.0.0.2", 00:14:24.918 "trsvcid": "4420" 00:14:24.918 }, 00:14:24.918 "peer_address": { 00:14:24.918 "trtype": "TCP", 00:14:24.918 "adrfam": "IPv4", 00:14:24.918 "traddr": "10.0.0.1", 00:14:24.918 "trsvcid": "60558" 00:14:24.918 }, 00:14:24.918 "auth": { 00:14:24.918 "state": "completed", 00:14:24.918 "digest": "sha384", 00:14:24.918 "dhgroup": "ffdhe4096" 00:14:24.918 } 00:14:24.918 } 00:14:24.918 ]' 00:14:24.918 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.176 06:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.433 06:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:26.000 06:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.000 06:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:26.000 06:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.000 06:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.258 06:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.258 06:00:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.258 06:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:26.258 06:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:26.516 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:26.516 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.516 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:26.516 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:26.516 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.517 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.775 00:14:26.775 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.775 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.775 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.034 { 00:14:27.034 "cntlid": 79, 00:14:27.034 "qid": 0, 00:14:27.034 "state": "enabled", 00:14:27.034 "thread": "nvmf_tgt_poll_group_000", 00:14:27.034 "listen_address": { 00:14:27.034 "trtype": "TCP", 00:14:27.034 "adrfam": "IPv4", 00:14:27.034 "traddr": "10.0.0.2", 00:14:27.034 "trsvcid": "4420" 00:14:27.034 }, 00:14:27.034 "peer_address": { 00:14:27.034 "trtype": "TCP", 
00:14:27.034 "adrfam": "IPv4", 00:14:27.034 "traddr": "10.0.0.1", 00:14:27.034 "trsvcid": "60588" 00:14:27.034 }, 00:14:27.034 "auth": { 00:14:27.034 "state": "completed", 00:14:27.034 "digest": "sha384", 00:14:27.034 "dhgroup": "ffdhe4096" 00:14:27.034 } 00:14:27.034 } 00:14:27.034 ]' 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.034 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.292 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.292 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.292 06:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.551 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.118 06:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.377 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.988 00:14:28.988 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.988 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.988 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.246 { 00:14:29.246 "cntlid": 81, 00:14:29.246 "qid": 0, 00:14:29.246 "state": "enabled", 00:14:29.246 "thread": "nvmf_tgt_poll_group_000", 00:14:29.246 "listen_address": { 00:14:29.246 "trtype": "TCP", 00:14:29.246 "adrfam": "IPv4", 00:14:29.246 "traddr": "10.0.0.2", 00:14:29.246 "trsvcid": "4420" 00:14:29.246 }, 00:14:29.246 "peer_address": { 00:14:29.246 "trtype": "TCP", 00:14:29.246 "adrfam": "IPv4", 00:14:29.246 "traddr": "10.0.0.1", 00:14:29.246 "trsvcid": "60626" 00:14:29.246 }, 00:14:29.246 "auth": { 00:14:29.246 "state": "completed", 00:14:29.246 "digest": "sha384", 00:14:29.246 "dhgroup": "ffdhe6144" 00:14:29.246 } 00:14:29.246 } 00:14:29.246 ]' 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.246 06:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.246 06:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.246 06:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.246 06:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.246 06:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.246 06:00:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.504 06:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.436 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.693 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.694 06:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.694 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.694 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.951 00:14:30.951 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.951 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.951 06:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.209 { 00:14:31.209 "cntlid": 83, 00:14:31.209 "qid": 0, 00:14:31.209 "state": "enabled", 00:14:31.209 "thread": "nvmf_tgt_poll_group_000", 00:14:31.209 "listen_address": { 00:14:31.209 "trtype": "TCP", 00:14:31.209 "adrfam": "IPv4", 00:14:31.209 "traddr": "10.0.0.2", 00:14:31.209 "trsvcid": "4420" 00:14:31.209 }, 00:14:31.209 "peer_address": { 00:14:31.209 "trtype": "TCP", 00:14:31.209 "adrfam": "IPv4", 00:14:31.209 "traddr": "10.0.0.1", 00:14:31.209 "trsvcid": "60650" 00:14:31.209 }, 00:14:31.209 "auth": { 00:14:31.209 "state": "completed", 00:14:31.209 "digest": "sha384", 00:14:31.209 "dhgroup": "ffdhe6144" 00:14:31.209 } 00:14:31.209 } 00:14:31.209 ]' 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.209 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.466 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.466 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.466 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.466 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.466 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.723 06:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:32.288 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.545 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.803 00:14:33.062 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.062 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.062 06:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.320 { 00:14:33.320 "cntlid": 85, 
00:14:33.320 "qid": 0, 00:14:33.320 "state": "enabled", 00:14:33.320 "thread": "nvmf_tgt_poll_group_000", 00:14:33.320 "listen_address": { 00:14:33.320 "trtype": "TCP", 00:14:33.320 "adrfam": "IPv4", 00:14:33.320 "traddr": "10.0.0.2", 00:14:33.320 "trsvcid": "4420" 00:14:33.320 }, 00:14:33.320 "peer_address": { 00:14:33.320 "trtype": "TCP", 00:14:33.320 "adrfam": "IPv4", 00:14:33.320 "traddr": "10.0.0.1", 00:14:33.320 "trsvcid": "38316" 00:14:33.320 }, 00:14:33.320 "auth": { 00:14:33.320 "state": "completed", 00:14:33.320 "digest": "sha384", 00:14:33.320 "dhgroup": "ffdhe6144" 00:14:33.320 } 00:14:33.320 } 00:14:33.320 ]' 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.320 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.577 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:34.143 06:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:34.143 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.401 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.967 00:14:34.967 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.967 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.967 06:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.225 { 00:14:35.225 "cntlid": 87, 00:14:35.225 "qid": 0, 00:14:35.225 "state": "enabled", 00:14:35.225 "thread": "nvmf_tgt_poll_group_000", 00:14:35.225 "listen_address": { 00:14:35.225 "trtype": "TCP", 00:14:35.225 "adrfam": "IPv4", 00:14:35.225 "traddr": "10.0.0.2", 00:14:35.225 "trsvcid": "4420" 00:14:35.225 }, 00:14:35.225 "peer_address": { 00:14:35.225 "trtype": "TCP", 00:14:35.225 "adrfam": "IPv4", 00:14:35.225 "traddr": "10.0.0.1", 00:14:35.225 "trsvcid": "38352" 00:14:35.225 }, 00:14:35.225 "auth": { 00:14:35.225 "state": "completed", 00:14:35.225 "digest": "sha384", 00:14:35.225 "dhgroup": "ffdhe6144" 00:14:35.225 } 00:14:35.225 } 00:14:35.225 ]' 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.225 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.791 06:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:36.358 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.617 06:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.183 00:14:37.183 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.183 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.183 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.442 { 00:14:37.442 "cntlid": 89, 00:14:37.442 "qid": 0, 00:14:37.442 "state": "enabled", 00:14:37.442 "thread": "nvmf_tgt_poll_group_000", 00:14:37.442 "listen_address": { 00:14:37.442 "trtype": "TCP", 00:14:37.442 "adrfam": "IPv4", 00:14:37.442 "traddr": "10.0.0.2", 00:14:37.442 "trsvcid": "4420" 00:14:37.442 }, 00:14:37.442 "peer_address": { 00:14:37.442 "trtype": "TCP", 00:14:37.442 "adrfam": "IPv4", 00:14:37.442 "traddr": "10.0.0.1", 00:14:37.442 "trsvcid": "38388" 00:14:37.442 }, 00:14:37.442 "auth": { 00:14:37.442 "state": "completed", 00:14:37.442 "digest": "sha384", 00:14:37.442 "dhgroup": "ffdhe8192" 00:14:37.442 } 00:14:37.442 } 00:14:37.442 ]' 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.442 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.023 06:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.589 
06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.589 06:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.154 00:14:39.154 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.154 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.154 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.412 { 00:14:39.412 "cntlid": 91, 00:14:39.412 "qid": 0, 00:14:39.412 "state": "enabled", 00:14:39.412 "thread": "nvmf_tgt_poll_group_000", 00:14:39.412 "listen_address": { 00:14:39.412 "trtype": "TCP", 00:14:39.412 "adrfam": "IPv4", 00:14:39.412 "traddr": "10.0.0.2", 00:14:39.412 "trsvcid": "4420" 00:14:39.412 }, 00:14:39.412 "peer_address": { 00:14:39.412 "trtype": "TCP", 00:14:39.412 "adrfam": "IPv4", 00:14:39.412 "traddr": "10.0.0.1", 00:14:39.412 "trsvcid": "38430" 00:14:39.412 }, 00:14:39.412 "auth": { 00:14:39.412 "state": "completed", 00:14:39.412 "digest": "sha384", 00:14:39.412 "dhgroup": "ffdhe8192" 00:14:39.412 } 00:14:39.412 } 00:14:39.412 ]' 00:14:39.412 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.671 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.929 06:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.496 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.754 06:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.321 00:14:41.321 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.321 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.321 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.578 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.578 { 00:14:41.578 "cntlid": 93, 00:14:41.578 "qid": 0, 00:14:41.578 "state": "enabled", 00:14:41.578 "thread": "nvmf_tgt_poll_group_000", 00:14:41.578 "listen_address": { 00:14:41.578 "trtype": "TCP", 00:14:41.578 "adrfam": "IPv4", 00:14:41.578 "traddr": "10.0.0.2", 00:14:41.578 "trsvcid": "4420" 00:14:41.579 }, 00:14:41.579 "peer_address": { 00:14:41.579 "trtype": "TCP", 00:14:41.579 "adrfam": "IPv4", 00:14:41.579 "traddr": "10.0.0.1", 00:14:41.579 "trsvcid": "38452" 00:14:41.579 }, 00:14:41.579 "auth": { 00:14:41.579 "state": "completed", 00:14:41.579 "digest": "sha384", 00:14:41.579 "dhgroup": "ffdhe8192" 00:14:41.579 } 00:14:41.579 } 00:14:41.579 ]' 00:14:41.579 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.579 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.579 06:00:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.836 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:41.836 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.836 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.836 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.836 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.094 06:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:42.660 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.917 06:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.483 00:14:43.483 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.483 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.483 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.741 { 00:14:43.741 "cntlid": 95, 00:14:43.741 "qid": 0, 00:14:43.741 "state": "enabled", 00:14:43.741 "thread": "nvmf_tgt_poll_group_000", 00:14:43.741 "listen_address": { 00:14:43.741 "trtype": "TCP", 00:14:43.741 "adrfam": "IPv4", 00:14:43.741 "traddr": "10.0.0.2", 00:14:43.741 "trsvcid": "4420" 00:14:43.741 }, 00:14:43.741 "peer_address": { 00:14:43.741 "trtype": "TCP", 00:14:43.741 "adrfam": "IPv4", 00:14:43.741 "traddr": "10.0.0.1", 00:14:43.741 "trsvcid": "41252" 00:14:43.741 }, 00:14:43.741 "auth": { 00:14:43.741 "state": "completed", 00:14:43.741 "digest": "sha384", 00:14:43.741 "dhgroup": "ffdhe8192" 00:14:43.741 } 00:14:43.741 } 00:14:43.741 ]' 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.741 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.742 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.742 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.742 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.000 06:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.586 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.586 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.859 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.117 00:14:45.117 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.117 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.117 06:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.375 { 00:14:45.375 "cntlid": 97, 00:14:45.375 "qid": 0, 00:14:45.375 "state": "enabled", 00:14:45.375 "thread": "nvmf_tgt_poll_group_000", 00:14:45.375 "listen_address": { 00:14:45.375 "trtype": "TCP", 00:14:45.375 "adrfam": "IPv4", 00:14:45.375 "traddr": "10.0.0.2", 00:14:45.375 "trsvcid": "4420" 00:14:45.375 }, 00:14:45.375 "peer_address": { 00:14:45.375 "trtype": "TCP", 00:14:45.375 "adrfam": "IPv4", 00:14:45.375 "traddr": "10.0.0.1", 00:14:45.375 "trsvcid": "41268" 00:14:45.375 }, 00:14:45.375 "auth": { 00:14:45.375 "state": "completed", 00:14:45.375 "digest": "sha512", 00:14:45.375 "dhgroup": "null" 00:14:45.375 } 00:14:45.375 } 00:14:45.375 ]' 00:14:45.375 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.633 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.892 06:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:46.458 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.716 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.974 00:14:46.974 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.974 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.974 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.232 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.232 06:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.232 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.232 06:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.232 { 00:14:47.232 "cntlid": 99, 00:14:47.232 "qid": 0, 00:14:47.232 "state": "enabled", 00:14:47.232 "thread": "nvmf_tgt_poll_group_000", 00:14:47.232 "listen_address": { 00:14:47.232 "trtype": "TCP", 00:14:47.232 "adrfam": "IPv4", 00:14:47.232 "traddr": "10.0.0.2", 00:14:47.232 "trsvcid": "4420" 00:14:47.232 }, 00:14:47.232 "peer_address": { 00:14:47.232 "trtype": "TCP", 00:14:47.232 "adrfam": "IPv4", 00:14:47.232 "traddr": "10.0.0.1", 00:14:47.232 "trsvcid": "41286" 00:14:47.232 }, 00:14:47.232 "auth": { 00:14:47.232 "state": "completed", 00:14:47.232 "digest": "sha512", 00:14:47.232 "dhgroup": "null" 00:14:47.232 } 
00:14:47.232 } 00:14:47.232 ]' 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:47.232 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.491 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.491 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.491 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.491 06:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:48.423 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.423 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:48.423 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.423 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.423 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.424 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.988 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.988 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.246 06:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.246 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.246 { 00:14:49.246 "cntlid": 101, 00:14:49.246 "qid": 0, 00:14:49.246 "state": "enabled", 00:14:49.246 "thread": "nvmf_tgt_poll_group_000", 00:14:49.246 "listen_address": { 00:14:49.246 "trtype": "TCP", 00:14:49.246 "adrfam": "IPv4", 00:14:49.246 "traddr": "10.0.0.2", 00:14:49.246 "trsvcid": "4420" 00:14:49.246 }, 00:14:49.246 "peer_address": { 00:14:49.246 "trtype": "TCP", 00:14:49.246 "adrfam": "IPv4", 00:14:49.246 "traddr": "10.0.0.1", 00:14:49.246 "trsvcid": "41324" 00:14:49.246 }, 00:14:49.246 "auth": { 00:14:49.246 "state": "completed", 00:14:49.246 "digest": "sha512", 00:14:49.246 "dhgroup": "null" 00:14:49.246 } 00:14:49.246 } 00:14:49.246 ]' 00:14:49.246 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.246 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.246 06:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.246 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:49.246 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.246 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.246 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.246 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.504 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 
8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:50.070 06:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.328 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.587 00:14:50.587 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.587 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.587 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.845 { 00:14:50.845 "cntlid": 103, 00:14:50.845 "qid": 0, 00:14:50.845 "state": "enabled", 00:14:50.845 "thread": "nvmf_tgt_poll_group_000", 00:14:50.845 "listen_address": { 00:14:50.845 "trtype": "TCP", 00:14:50.845 "adrfam": "IPv4", 00:14:50.845 "traddr": "10.0.0.2", 00:14:50.845 "trsvcid": "4420" 00:14:50.845 }, 00:14:50.845 "peer_address": { 00:14:50.845 "trtype": "TCP", 00:14:50.845 "adrfam": "IPv4", 00:14:50.845 "traddr": "10.0.0.1", 00:14:50.845 "trsvcid": "41356" 00:14:50.845 }, 00:14:50.845 "auth": { 00:14:50.845 "state": "completed", 00:14:50.845 "digest": "sha512", 00:14:50.845 "dhgroup": "null" 00:14:50.845 } 00:14:50.845 } 00:14:50.845 ]' 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.845 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.103 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:51.103 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.103 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.103 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.103 06:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.361 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.926 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:51.926 06:01:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.184 06:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.442 00:14:52.442 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.442 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.442 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.700 { 00:14:52.700 "cntlid": 105, 00:14:52.700 "qid": 0, 00:14:52.700 "state": "enabled", 00:14:52.700 "thread": "nvmf_tgt_poll_group_000", 00:14:52.700 "listen_address": { 00:14:52.700 "trtype": "TCP", 00:14:52.700 "adrfam": "IPv4", 00:14:52.700 "traddr": "10.0.0.2", 00:14:52.700 "trsvcid": "4420" 00:14:52.700 }, 00:14:52.700 "peer_address": { 00:14:52.700 "trtype": "TCP", 00:14:52.700 "adrfam": "IPv4", 00:14:52.700 "traddr": "10.0.0.1", 00:14:52.700 "trsvcid": "59538" 00:14:52.700 }, 00:14:52.700 "auth": { 00:14:52.700 "state": "completed", 
00:14:52.700 "digest": "sha512", 00:14:52.700 "dhgroup": "ffdhe2048" 00:14:52.700 } 00:14:52.700 } 00:14:52.700 ]' 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.700 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.958 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:52.958 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.958 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.958 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.958 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.216 06:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:53.783 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.041 06:01:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.041 06:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.299 00:14:54.299 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.299 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.299 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.557 { 00:14:54.557 "cntlid": 107, 00:14:54.557 "qid": 0, 00:14:54.557 "state": "enabled", 00:14:54.557 "thread": "nvmf_tgt_poll_group_000", 00:14:54.557 "listen_address": { 00:14:54.557 "trtype": "TCP", 00:14:54.557 "adrfam": "IPv4", 00:14:54.557 "traddr": "10.0.0.2", 00:14:54.557 "trsvcid": "4420" 00:14:54.557 }, 00:14:54.557 "peer_address": { 00:14:54.557 "trtype": "TCP", 00:14:54.557 "adrfam": "IPv4", 00:14:54.557 "traddr": "10.0.0.1", 00:14:54.557 "trsvcid": "59554" 00:14:54.557 }, 00:14:54.557 "auth": { 00:14:54.557 "state": "completed", 00:14:54.557 "digest": "sha512", 00:14:54.557 "dhgroup": "ffdhe2048" 00:14:54.557 } 00:14:54.557 } 00:14:54.557 ]' 00:14:54.557 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.815 06:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.072 06:01:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:55.637 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.895 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.153 00:14:56.153 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.153 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:56.153 06:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.411 { 00:14:56.411 "cntlid": 109, 00:14:56.411 "qid": 0, 00:14:56.411 "state": "enabled", 00:14:56.411 "thread": "nvmf_tgt_poll_group_000", 00:14:56.411 "listen_address": { 00:14:56.411 "trtype": "TCP", 00:14:56.411 "adrfam": "IPv4", 00:14:56.411 "traddr": "10.0.0.2", 00:14:56.411 "trsvcid": "4420" 00:14:56.411 }, 00:14:56.411 "peer_address": { 00:14:56.411 "trtype": "TCP", 00:14:56.411 "adrfam": "IPv4", 00:14:56.411 "traddr": "10.0.0.1", 00:14:56.411 "trsvcid": "59578" 00:14:56.411 }, 00:14:56.411 "auth": { 00:14:56.411 "state": "completed", 00:14:56.411 "digest": "sha512", 00:14:56.411 "dhgroup": "ffdhe2048" 00:14:56.411 } 00:14:56.411 } 00:14:56.411 ]' 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.411 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.670 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.670 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.670 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.670 06:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.236 06:01:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:57.236 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.805 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.805 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.061 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.061 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.061 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.061 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.319 06:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.319 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.319 { 00:14:58.319 "cntlid": 111, 00:14:58.319 "qid": 0, 00:14:58.319 "state": "enabled", 00:14:58.319 "thread": "nvmf_tgt_poll_group_000", 00:14:58.319 "listen_address": { 00:14:58.319 "trtype": "TCP", 00:14:58.319 "adrfam": "IPv4", 00:14:58.319 "traddr": "10.0.0.2", 00:14:58.319 "trsvcid": "4420" 00:14:58.319 }, 00:14:58.319 "peer_address": { 00:14:58.319 "trtype": "TCP", 
00:14:58.319 "adrfam": "IPv4", 00:14:58.319 "traddr": "10.0.0.1", 00:14:58.319 "trsvcid": "59610" 00:14:58.319 }, 00:14:58.319 "auth": { 00:14:58.319 "state": "completed", 00:14:58.319 "digest": "sha512", 00:14:58.319 "dhgroup": "ffdhe2048" 00:14:58.319 } 00:14:58.319 } 00:14:58.319 ]' 00:14:58.319 06:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.319 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.576 06:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:14:59.141 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.141 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:14:59.141 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.141 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.398 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.398 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.398 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.398 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:59.398 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.656 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.925 00:14:59.925 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.925 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.925 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.205 { 00:15:00.205 "cntlid": 113, 00:15:00.205 "qid": 0, 00:15:00.205 "state": "enabled", 00:15:00.205 "thread": "nvmf_tgt_poll_group_000", 00:15:00.205 "listen_address": { 00:15:00.205 "trtype": "TCP", 00:15:00.205 "adrfam": "IPv4", 00:15:00.205 "traddr": "10.0.0.2", 00:15:00.205 "trsvcid": "4420" 00:15:00.205 }, 00:15:00.205 "peer_address": { 00:15:00.205 "trtype": "TCP", 00:15:00.205 "adrfam": "IPv4", 00:15:00.205 "traddr": "10.0.0.1", 00:15:00.205 "trsvcid": "59628" 00:15:00.205 }, 00:15:00.205 "auth": { 00:15:00.205 "state": "completed", 00:15:00.205 "digest": "sha512", 00:15:00.205 "dhgroup": "ffdhe3072" 00:15:00.205 } 00:15:00.205 } 00:15:00.205 ]' 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.205 06:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.205 06:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.205 06:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.205 06:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.205 06:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.205 06:01:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.770 06:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:01.348 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.611 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.869 00:15:01.869 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.869 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.869 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.127 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.128 { 00:15:02.128 "cntlid": 115, 00:15:02.128 "qid": 0, 00:15:02.128 "state": "enabled", 00:15:02.128 "thread": "nvmf_tgt_poll_group_000", 00:15:02.128 "listen_address": { 00:15:02.128 "trtype": "TCP", 00:15:02.128 "adrfam": "IPv4", 00:15:02.128 "traddr": "10.0.0.2", 00:15:02.128 "trsvcid": "4420" 00:15:02.128 }, 00:15:02.128 "peer_address": { 00:15:02.128 "trtype": "TCP", 00:15:02.128 "adrfam": "IPv4", 00:15:02.128 "traddr": "10.0.0.1", 00:15:02.128 "trsvcid": "58878" 00:15:02.128 }, 00:15:02.128 "auth": { 00:15:02.128 "state": "completed", 00:15:02.128 "digest": "sha512", 00:15:02.128 "dhgroup": "ffdhe3072" 00:15:02.128 } 00:15:02.128 } 00:15:02.128 ]' 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.128 06:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.386 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.320 06:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:03.320 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.321 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.887 00:15:03.887 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.887 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.887 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.145 { 00:15:04.145 "cntlid": 117, 
00:15:04.145 "qid": 0, 00:15:04.145 "state": "enabled", 00:15:04.145 "thread": "nvmf_tgt_poll_group_000", 00:15:04.145 "listen_address": { 00:15:04.145 "trtype": "TCP", 00:15:04.145 "adrfam": "IPv4", 00:15:04.145 "traddr": "10.0.0.2", 00:15:04.145 "trsvcid": "4420" 00:15:04.145 }, 00:15:04.145 "peer_address": { 00:15:04.145 "trtype": "TCP", 00:15:04.145 "adrfam": "IPv4", 00:15:04.145 "traddr": "10.0.0.1", 00:15:04.145 "trsvcid": "58902" 00:15:04.145 }, 00:15:04.145 "auth": { 00:15:04.145 "state": "completed", 00:15:04.145 "digest": "sha512", 00:15:04.145 "dhgroup": "ffdhe3072" 00:15:04.145 } 00:15:04.145 } 00:15:04.145 ]' 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.145 06:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.402 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:04.968 06:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.226 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.493 00:15:05.751 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.751 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.751 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.010 { 00:15:06.010 "cntlid": 119, 00:15:06.010 "qid": 0, 00:15:06.010 "state": "enabled", 00:15:06.010 "thread": "nvmf_tgt_poll_group_000", 00:15:06.010 "listen_address": { 00:15:06.010 "trtype": "TCP", 00:15:06.010 "adrfam": "IPv4", 00:15:06.010 "traddr": "10.0.0.2", 00:15:06.010 "trsvcid": "4420" 00:15:06.010 }, 00:15:06.010 "peer_address": { 00:15:06.010 "trtype": "TCP", 00:15:06.010 "adrfam": "IPv4", 00:15:06.010 "traddr": "10.0.0.1", 00:15:06.010 "trsvcid": "58928" 00:15:06.010 }, 00:15:06.010 "auth": { 00:15:06.010 "state": "completed", 00:15:06.010 "digest": "sha512", 00:15:06.010 "dhgroup": "ffdhe3072" 00:15:06.010 } 00:15:06.010 } 00:15:06.010 ]' 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.010 06:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.268 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:07.205 06:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.205 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.772 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.772 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.772 { 00:15:07.772 "cntlid": 121, 00:15:07.772 "qid": 0, 00:15:07.772 "state": "enabled", 00:15:07.772 "thread": "nvmf_tgt_poll_group_000", 00:15:07.772 "listen_address": { 00:15:07.772 "trtype": "TCP", 00:15:07.772 "adrfam": "IPv4", 00:15:07.772 "traddr": "10.0.0.2", 00:15:07.772 "trsvcid": "4420" 00:15:07.772 }, 00:15:07.772 "peer_address": { 00:15:07.772 "trtype": "TCP", 00:15:07.772 "adrfam": "IPv4", 00:15:07.772 "traddr": "10.0.0.1", 00:15:07.773 "trsvcid": "58946" 00:15:07.773 }, 00:15:07.773 "auth": { 00:15:07.773 "state": "completed", 00:15:07.773 "digest": "sha512", 00:15:07.773 "dhgroup": "ffdhe4096" 00:15:07.773 } 00:15:07.773 } 00:15:07.773 ]' 00:15:07.773 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.032 06:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.291 06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.858 
06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:08.858 06:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.116 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.683 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.683 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.942 { 00:15:09.942 "cntlid": 123, 00:15:09.942 "qid": 0, 00:15:09.942 "state": "enabled", 00:15:09.942 "thread": "nvmf_tgt_poll_group_000", 00:15:09.942 "listen_address": { 00:15:09.942 "trtype": "TCP", 00:15:09.942 "adrfam": "IPv4", 00:15:09.942 "traddr": "10.0.0.2", 00:15:09.942 "trsvcid": "4420" 00:15:09.942 }, 00:15:09.942 "peer_address": { 00:15:09.942 "trtype": "TCP", 00:15:09.942 "adrfam": "IPv4", 00:15:09.942 "traddr": "10.0.0.1", 00:15:09.942 "trsvcid": "58960" 00:15:09.942 }, 00:15:09.942 "auth": { 00:15:09.942 "state": "completed", 00:15:09.942 "digest": "sha512", 00:15:09.942 "dhgroup": "ffdhe4096" 00:15:09.942 } 00:15:09.942 } 00:15:09.942 ]' 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.942 06:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.200 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.136 06:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.136 06:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.136 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.136 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.703 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.703 { 00:15:11.703 "cntlid": 125, 00:15:11.703 "qid": 0, 00:15:11.703 "state": "enabled", 00:15:11.703 "thread": "nvmf_tgt_poll_group_000", 00:15:11.703 "listen_address": { 00:15:11.703 "trtype": "TCP", 00:15:11.703 "adrfam": "IPv4", 00:15:11.703 "traddr": "10.0.0.2", 00:15:11.703 "trsvcid": "4420" 00:15:11.703 }, 00:15:11.703 "peer_address": { 00:15:11.703 "trtype": "TCP", 00:15:11.703 "adrfam": "IPv4", 00:15:11.703 "traddr": "10.0.0.1", 00:15:11.703 "trsvcid": "59000" 00:15:11.703 }, 00:15:11.703 "auth": { 00:15:11.703 "state": "completed", 00:15:11.703 "digest": "sha512", 00:15:11.703 "dhgroup": "ffdhe4096" 00:15:11.703 } 00:15:11.703 } 00:15:11.703 ]' 00:15:11.703 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.962 06:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.220 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:12.786 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.045 06:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.303 00:15:13.303 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.303 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.303 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.561 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.561 { 00:15:13.561 "cntlid": 127, 00:15:13.561 "qid": 0, 00:15:13.561 "state": "enabled", 00:15:13.561 "thread": "nvmf_tgt_poll_group_000", 00:15:13.561 "listen_address": { 00:15:13.561 "trtype": "TCP", 00:15:13.561 "adrfam": "IPv4", 00:15:13.561 "traddr": "10.0.0.2", 00:15:13.561 "trsvcid": "4420" 00:15:13.561 }, 00:15:13.561 "peer_address": { 00:15:13.561 "trtype": "TCP", 00:15:13.561 "adrfam": "IPv4", 00:15:13.561 "traddr": "10.0.0.1", 00:15:13.561 "trsvcid": "44120" 00:15:13.561 }, 00:15:13.561 "auth": { 00:15:13.562 "state": "completed", 00:15:13.562 "digest": "sha512", 00:15:13.562 "dhgroup": "ffdhe4096" 00:15:13.562 } 00:15:13.562 } 00:15:13.562 ]' 00:15:13.562 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.562 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.562 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.819 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:13.819 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.819 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.819 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.819 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.077 06:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.643 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.643 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.644 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:14.644 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.901 06:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.158 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
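For readability, here is a condensed sketch of the connect_authenticate step that this trace repeats for every digest/dhgroup/key combination (the iteration just above is sha512 / ffdhe6144 / key0). It restates only commands visible in the trace: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, target-side calls go through the framework's rpc_cmd helper (target RPC socket), host-side "hostrpc" calls add -s /var/tmp/host.sock, and key0/ckey0 are key names the test set up earlier.

  # host bdev_nvme options: restrict DH-HMAC-CHAP negotiation to one digest and one DH group
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target: allow the host NQN and bind it to a key pair (the ctrlr key enables bidirectional auth)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host: attach a controller that authenticates with the same key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0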
00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.417 { 00:15:15.417 "cntlid": 129, 00:15:15.417 "qid": 0, 00:15:15.417 "state": "enabled", 00:15:15.417 "thread": "nvmf_tgt_poll_group_000", 00:15:15.417 "listen_address": { 00:15:15.417 "trtype": "TCP", 00:15:15.417 "adrfam": "IPv4", 00:15:15.417 "traddr": "10.0.0.2", 00:15:15.417 "trsvcid": "4420" 00:15:15.417 }, 00:15:15.417 "peer_address": { 00:15:15.417 "trtype": "TCP", 00:15:15.417 "adrfam": "IPv4", 00:15:15.417 "traddr": "10.0.0.1", 00:15:15.417 "trsvcid": "44146" 00:15:15.417 }, 00:15:15.417 "auth": { 00:15:15.417 "state": "completed", 00:15:15.417 "digest": "sha512", 00:15:15.417 "dhgroup": "ffdhe6144" 00:15:15.417 } 00:15:15.417 } 00:15:15.417 ]' 00:15:15.417 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.675 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.933 06:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.875 06:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.442 00:15:17.442 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.442 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.442 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.700 { 00:15:17.700 "cntlid": 131, 00:15:17.700 "qid": 0, 00:15:17.700 "state": "enabled", 00:15:17.700 "thread": "nvmf_tgt_poll_group_000", 00:15:17.700 "listen_address": { 00:15:17.700 "trtype": "TCP", 00:15:17.700 "adrfam": "IPv4", 00:15:17.700 "traddr": "10.0.0.2", 00:15:17.700 "trsvcid": "4420" 00:15:17.700 }, 00:15:17.700 "peer_address": { 00:15:17.700 "trtype": "TCP", 00:15:17.700 "adrfam": "IPv4", 00:15:17.700 "traddr": "10.0.0.1", 00:15:17.700 "trsvcid": "44160" 00:15:17.700 }, 00:15:17.700 "auth": { 00:15:17.700 "state": "completed", 00:15:17.700 "digest": "sha512", 00:15:17.700 "dhgroup": "ffdhe6144" 00:15:17.700 } 00:15:17.700 } 00:15:17.700 ]' 00:15:17.700 
06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.700 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.957 06:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
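The checks that recur after every attach (target/auth.sh@44-48, as seen just above) boil down to the sketch below, using the same jq filters as the trace; rpc.py again stands for the SPDK rpc script invoked above, and the expected values are the ones for the current sha512/ffdhe6144 iteration.

  # the controller must have come up under the expected name on the host
  [[ "$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  # the target's qpair listing reports the parameters that were actually negotiated
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe6144 ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]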
00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.888 06:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.453 00:15:19.453 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.453 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.453 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.711 { 00:15:19.711 "cntlid": 133, 00:15:19.711 "qid": 0, 00:15:19.711 "state": "enabled", 00:15:19.711 "thread": "nvmf_tgt_poll_group_000", 00:15:19.711 "listen_address": { 00:15:19.711 "trtype": "TCP", 00:15:19.711 "adrfam": "IPv4", 00:15:19.711 "traddr": "10.0.0.2", 00:15:19.711 "trsvcid": "4420" 00:15:19.711 }, 00:15:19.711 "peer_address": { 00:15:19.711 "trtype": "TCP", 00:15:19.711 "adrfam": "IPv4", 00:15:19.711 "traddr": "10.0.0.1", 00:15:19.711 "trsvcid": "44196" 00:15:19.711 }, 00:15:19.711 "auth": { 00:15:19.711 "state": "completed", 00:15:19.711 "digest": "sha512", 00:15:19.711 "dhgroup": "ffdhe6144" 00:15:19.711 } 00:15:19.711 } 00:15:19.711 ]' 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:19.711 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.969 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.969 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.969 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.227 06:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 
8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:15:20.792 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:20.793 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:21.051 06:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:21.619 00:15:21.619 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.619 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.619 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.877 { 00:15:21.877 "cntlid": 135, 00:15:21.877 "qid": 0, 00:15:21.877 "state": "enabled", 00:15:21.877 "thread": "nvmf_tgt_poll_group_000", 00:15:21.877 "listen_address": { 00:15:21.877 "trtype": "TCP", 00:15:21.877 "adrfam": "IPv4", 00:15:21.877 "traddr": "10.0.0.2", 00:15:21.877 "trsvcid": "4420" 00:15:21.877 }, 00:15:21.877 "peer_address": { 00:15:21.877 "trtype": "TCP", 00:15:21.877 "adrfam": "IPv4", 00:15:21.877 "traddr": "10.0.0.1", 00:15:21.877 "trsvcid": "44228" 00:15:21.877 }, 00:15:21.877 "auth": { 00:15:21.877 "state": "completed", 00:15:21.877 "digest": "sha512", 00:15:21.877 "dhgroup": "ffdhe6144" 00:15:21.877 } 00:15:21.877 } 00:15:21.877 ]' 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.877 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.136 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.136 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.136 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.136 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.136 06:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.394 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.960 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:15:22.961 06:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.219 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.786 00:15:24.044 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.044 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.044 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.302 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.302 06:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.302 06:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.302 06:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.302 06:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.302 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.302 { 00:15:24.302 "cntlid": 137, 00:15:24.302 "qid": 0, 00:15:24.302 "state": "enabled", 00:15:24.302 "thread": "nvmf_tgt_poll_group_000", 00:15:24.302 "listen_address": { 00:15:24.302 "trtype": "TCP", 00:15:24.302 "adrfam": "IPv4", 00:15:24.302 "traddr": "10.0.0.2", 00:15:24.302 "trsvcid": "4420" 00:15:24.302 }, 00:15:24.302 "peer_address": { 00:15:24.302 "trtype": "TCP", 00:15:24.302 "adrfam": "IPv4", 00:15:24.302 "traddr": "10.0.0.1", 00:15:24.302 "trsvcid": "36356" 00:15:24.302 }, 00:15:24.303 "auth": { 00:15:24.303 
"state": "completed", 00:15:24.303 "digest": "sha512", 00:15:24.303 "dhgroup": "ffdhe8192" 00:15:24.303 } 00:15:24.303 } 00:15:24.303 ]' 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.303 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.562 06:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:25.129 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.387 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.645 06:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.645 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.645 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.213 00:15:26.213 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.213 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.213 06:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.472 { 00:15:26.472 "cntlid": 139, 00:15:26.472 "qid": 0, 00:15:26.472 "state": "enabled", 00:15:26.472 "thread": "nvmf_tgt_poll_group_000", 00:15:26.472 "listen_address": { 00:15:26.472 "trtype": "TCP", 00:15:26.472 "adrfam": "IPv4", 00:15:26.472 "traddr": "10.0.0.2", 00:15:26.472 "trsvcid": "4420" 00:15:26.472 }, 00:15:26.472 "peer_address": { 00:15:26.472 "trtype": "TCP", 00:15:26.472 "adrfam": "IPv4", 00:15:26.472 "traddr": "10.0.0.1", 00:15:26.472 "trsvcid": "36376" 00:15:26.472 }, 00:15:26.472 "auth": { 00:15:26.472 "state": "completed", 00:15:26.472 "digest": "sha512", 00:15:26.472 "dhgroup": "ffdhe8192" 00:15:26.472 } 00:15:26.472 } 00:15:26.472 ]' 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.472 06:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.731 06:01:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:01:MDBjZjk0ZTcyOTM0NjA4NjQ5N2EzZDAxMjA5ZmE1N2GLnmT/: --dhchap-ctrl-secret DHHC-1:02:NzZiNzUyYmU0ZmQ4YzBkMTc1MTJkMGQxMmIyNmY5MDY1N2FlMjZiNTgxZWYzNTE36z+iuQ==: 00:15:27.298 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.298 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:27.298 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.298 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.298 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.299 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.299 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:27.299 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.558 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.834 06:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.834 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.834 06:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.404 00:15:28.404 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.404 06:01:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.404 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.663 { 00:15:28.663 "cntlid": 141, 00:15:28.663 "qid": 0, 00:15:28.663 "state": "enabled", 00:15:28.663 "thread": "nvmf_tgt_poll_group_000", 00:15:28.663 "listen_address": { 00:15:28.663 "trtype": "TCP", 00:15:28.663 "adrfam": "IPv4", 00:15:28.663 "traddr": "10.0.0.2", 00:15:28.663 "trsvcid": "4420" 00:15:28.663 }, 00:15:28.663 "peer_address": { 00:15:28.663 "trtype": "TCP", 00:15:28.663 "adrfam": "IPv4", 00:15:28.663 "traddr": "10.0.0.1", 00:15:28.663 "trsvcid": "36408" 00:15:28.663 }, 00:15:28.663 "auth": { 00:15:28.663 "state": "completed", 00:15:28.663 "digest": "sha512", 00:15:28.663 "dhgroup": "ffdhe8192" 00:15:28.663 } 00:15:28.663 } 00:15:28.663 ]' 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.663 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.921 06:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:02:MmNlZDA2YTUxODJhZGE4NDlhZTU1NDQzYmIzZjhjOGE2M2Y5ODUxZGU0NzkyNTM3AdZqag==: --dhchap-ctrl-secret DHHC-1:01:OWI0MjBlZDM0ZGYzMGM1MDkyNjVmYjA5NjRiOWM2NjRdh/PY: 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
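Each iteration also exercises the Linux kernel initiator, which takes the DH-HMAC-CHAP secrets directly on the command line instead of key names. The sketch below condenses the nvme connect / disconnect / cleanup sequence that recurs above; <host-uuid> stands for 8738190a-dd44-4449-9019-403e2a10a368, the DHHC-1 strings are abbreviated placeholders for the secrets visible in the trace, and rpc.py is the target-side SPDK rpc script as before.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # remove the host entry so the next key/dhgroup combination starts from a clean subsystem
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:<host-uuid>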
00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.857 06:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.424 00:15:30.682 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.682 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.682 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.941 { 00:15:30.941 "cntlid": 143, 00:15:30.941 "qid": 0, 00:15:30.941 "state": "enabled", 00:15:30.941 "thread": "nvmf_tgt_poll_group_000", 00:15:30.941 "listen_address": { 00:15:30.941 "trtype": "TCP", 00:15:30.941 "adrfam": "IPv4", 00:15:30.941 "traddr": "10.0.0.2", 00:15:30.941 "trsvcid": "4420" 00:15:30.941 }, 00:15:30.941 "peer_address": { 
00:15:30.941 "trtype": "TCP", 00:15:30.941 "adrfam": "IPv4", 00:15:30.941 "traddr": "10.0.0.1", 00:15:30.941 "trsvcid": "36430" 00:15:30.941 }, 00:15:30.941 "auth": { 00:15:30.941 "state": "completed", 00:15:30.941 "digest": "sha512", 00:15:30.941 "dhgroup": "ffdhe8192" 00:15:30.941 } 00:15:30.941 } 00:15:30.941 ]' 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.941 06:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.199 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:31.766 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.333 06:01:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.333 06:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.597 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.922 { 00:15:32.922 "cntlid": 145, 00:15:32.922 "qid": 0, 00:15:32.922 "state": "enabled", 00:15:32.922 "thread": "nvmf_tgt_poll_group_000", 00:15:32.922 "listen_address": { 00:15:32.922 "trtype": "TCP", 00:15:32.922 "adrfam": "IPv4", 00:15:32.922 "traddr": "10.0.0.2", 00:15:32.922 "trsvcid": "4420" 00:15:32.922 }, 00:15:32.922 "peer_address": { 00:15:32.922 "trtype": "TCP", 00:15:32.922 "adrfam": "IPv4", 00:15:32.922 "traddr": "10.0.0.1", 00:15:32.922 "trsvcid": "40762" 00:15:32.922 }, 00:15:32.922 "auth": { 00:15:32.922 "state": "completed", 00:15:32.922 "digest": "sha512", 00:15:32.922 "dhgroup": "ffdhe8192" 00:15:32.922 } 00:15:32.922 } 00:15:32.922 ]' 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.922 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.180 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
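For reference, the authenticated connect/verify round logged above can be replayed by hand with the same RPCs the test drives through rpc.py. A minimal sketch, assuming the DH-HMAC-CHAP keys (key0/ckey0 here) were already loaded for this subsystem earlier in the run, that the host-side bdev_nvme RPC server is listening on /var/tmp/host.sock, and that the target-side calls go to the nvmf_tgt default RPC socket; nvme0 is simply the controller name used in this run, and $rpc is local shorthand for the script path:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# host side: restrict the digests/dhgroups the initiator will offer
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# target side: allow the host NQN to authenticate with key0 (ckey0 enables bidirectional auth)
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach the controller, which triggers DH-HMAC-CHAP on the new queue pair
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# target side: once authentication completes, the qpair reports the negotiated parameters
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha512
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe8192

# tear the session down before trying the next digest/dhgroup combination
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0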
00:15:33.180 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.180 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.180 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.181 06:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.439 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:00:NmMyMWI5ZmZhNDVmMmM2NmM2YmI3NDljYWI1YjZmNTcxOWZhMzY1OTY2ZTJhYWFluTBjCg==: --dhchap-ctrl-secret DHHC-1:03:NjVmMDVmOGUwYjM1MWJmNzYxNDQ2ZmJjZmU2YWYwZGIyNDU5ZTlhZWVhMzRlM2Y5M2U5NzJiZjQ5MWI0NDhhZa/GZ70=: 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:34.005 06:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:34.940 request: 00:15:34.940 { 00:15:34.940 "name": "nvme0", 00:15:34.940 "trtype": "tcp", 00:15:34.940 "traddr": "10.0.0.2", 00:15:34.940 "adrfam": "ipv4", 00:15:34.940 "trsvcid": "4420", 00:15:34.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:34.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:34.940 "prchk_reftag": false, 00:15:34.940 "prchk_guard": false, 00:15:34.940 "hdgst": false, 00:15:34.940 "ddgst": false, 00:15:34.940 "dhchap_key": "key2", 00:15:34.940 "method": "bdev_nvme_attach_controller", 00:15:34.940 "req_id": 1 00:15:34.940 } 00:15:34.940 Got JSON-RPC error response 00:15:34.940 response: 00:15:34.940 { 00:15:34.940 "code": -5, 00:15:34.940 "message": "Input/output error" 00:15:34.940 } 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.940 06:01:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:34.940 06:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:35.199 request: 00:15:35.199 { 00:15:35.199 "name": "nvme0", 00:15:35.199 "trtype": "tcp", 00:15:35.199 "traddr": "10.0.0.2", 00:15:35.199 "adrfam": "ipv4", 00:15:35.199 "trsvcid": "4420", 00:15:35.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:35.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:35.199 "prchk_reftag": false, 00:15:35.199 "prchk_guard": false, 00:15:35.199 "hdgst": false, 00:15:35.199 "ddgst": false, 00:15:35.199 "dhchap_key": "key1", 00:15:35.199 "dhchap_ctrlr_key": "ckey2", 00:15:35.199 "method": "bdev_nvme_attach_controller", 00:15:35.199 "req_id": 1 00:15:35.199 } 00:15:35.199 Got JSON-RPC error response 00:15:35.199 response: 00:15:35.199 { 00:15:35.199 "code": -5, 00:15:35.199 "message": "Input/output error" 00:15:35.199 } 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key1 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 
-- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.457 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.024 request: 00:15:36.024 { 00:15:36.024 "name": "nvme0", 00:15:36.024 "trtype": "tcp", 00:15:36.024 "traddr": "10.0.0.2", 00:15:36.024 "adrfam": "ipv4", 00:15:36.024 "trsvcid": "4420", 00:15:36.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:36.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:36.024 "prchk_reftag": false, 00:15:36.024 "prchk_guard": false, 00:15:36.024 "hdgst": false, 00:15:36.024 "ddgst": false, 00:15:36.024 "dhchap_key": "key1", 00:15:36.024 "dhchap_ctrlr_key": "ckey1", 00:15:36.024 "method": "bdev_nvme_attach_controller", 00:15:36.024 "req_id": 1 00:15:36.024 } 00:15:36.024 Got JSON-RPC error response 00:15:36.024 response: 00:15:36.024 { 00:15:36.024 "code": -5, 00:15:36.024 "message": "Input/output error" 00:15:36.024 } 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 72091 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72091 ']' 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72091 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:36.024 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.024 
06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72091 00:15:36.024 killing process with pid 72091 00:15:36.025 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.025 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.025 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72091' 00:15:36.025 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72091 00:15:36.025 06:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72091 00:15:36.959 06:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=75013 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 75013 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75013 ']' 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.960 06:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:37.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
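The run above tears down the first target (pid 72091) and relaunches nvmf_tgt with --wait-for-rpc and -L nvmf_auth, so authentication debug logging is enabled and subsystem initialization is deferred until an RPC asks for it. A condensed sketch of that restart, reusing the command line from the log; the poll loop stands in for the test suite's waitforlisten helper, and rpc_get_methods / framework_start_init are assumed here as the stock SPDK RPCs for probing the socket and finishing deferred startup:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# relaunch the target in its network namespace with nvmf_auth debug logging,
# holding off framework initialization until told to proceed
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# complete startup; transport and subsystem configuration can follow after this
$rpc framework_start_init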
00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 75013 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 75013 ']' 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.893 06:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.152 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.152 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:38.152 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:38.152 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.152 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.718 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.283 00:15:39.283 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.283 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.283 06:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.542 { 00:15:39.542 "cntlid": 1, 00:15:39.542 "qid": 0, 00:15:39.542 "state": "enabled", 00:15:39.542 "thread": "nvmf_tgt_poll_group_000", 00:15:39.542 "listen_address": { 00:15:39.542 "trtype": "TCP", 00:15:39.542 "adrfam": "IPv4", 00:15:39.542 "traddr": "10.0.0.2", 00:15:39.542 "trsvcid": "4420" 00:15:39.542 }, 00:15:39.542 "peer_address": { 00:15:39.542 "trtype": "TCP", 00:15:39.542 "adrfam": "IPv4", 00:15:39.542 "traddr": "10.0.0.1", 00:15:39.542 "trsvcid": "40804" 00:15:39.542 }, 00:15:39.542 "auth": { 00:15:39.542 "state": "completed", 00:15:39.542 "digest": "sha512", 00:15:39.542 "dhgroup": "ffdhe8192" 00:15:39.542 } 00:15:39.542 } 00:15:39.542 ]' 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.542 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.800 06:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid 8738190a-dd44-4449-9019-403e2a10a368 --dhchap-secret DHHC-1:03:YmI0ZDc4NDUxOGRlMTRiOTUyY2ZkNDEwMmIxODQ5ZGQ3YmIxMDgxZmEwZWY1M2MwZWQ3ZTNjZDE4ZjNiNmMwOOt3N98=: 00:15:40.367 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --dhchap-key key3 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.626 06:01:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.626 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.193 request: 00:15:41.193 { 00:15:41.193 "name": "nvme0", 00:15:41.193 "trtype": "tcp", 00:15:41.193 "traddr": "10.0.0.2", 00:15:41.193 "adrfam": "ipv4", 00:15:41.193 "trsvcid": "4420", 00:15:41.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:41.193 "prchk_reftag": false, 00:15:41.193 "prchk_guard": false, 00:15:41.193 "hdgst": false, 00:15:41.193 "ddgst": false, 00:15:41.193 "dhchap_key": "key3", 00:15:41.193 "method": "bdev_nvme_attach_controller", 00:15:41.193 "req_id": 1 00:15:41.193 } 00:15:41.193 Got JSON-RPC error response 00:15:41.193 response: 00:15:41.193 { 00:15:41.193 "code": -5, 00:15:41.193 "message": "Input/output error" 00:15:41.193 } 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s 
sha256,sha384,sha512 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:41.193 06:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.193 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.452 request: 00:15:41.452 { 00:15:41.452 "name": "nvme0", 00:15:41.452 "trtype": "tcp", 00:15:41.452 "traddr": "10.0.0.2", 00:15:41.452 "adrfam": "ipv4", 00:15:41.452 "trsvcid": "4420", 00:15:41.452 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:41.452 "prchk_reftag": false, 00:15:41.452 "prchk_guard": false, 00:15:41.452 "hdgst": false, 00:15:41.452 "ddgst": false, 00:15:41.452 "dhchap_key": "key3", 00:15:41.452 "method": "bdev_nvme_attach_controller", 00:15:41.452 "req_id": 1 00:15:41.452 } 00:15:41.452 Got JSON-RPC error response 00:15:41.452 response: 00:15:41.452 { 00:15:41.452 "code": -5, 00:15:41.452 "message": "Input/output error" 00:15:41.452 } 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@175 -- # IFS=, 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:41.452 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.711 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.969 request: 00:15:41.969 { 00:15:41.969 "name": "nvme0", 
00:15:41.969 "trtype": "tcp", 00:15:41.969 "traddr": "10.0.0.2", 00:15:41.969 "adrfam": "ipv4", 00:15:41.969 "trsvcid": "4420", 00:15:41.969 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368", 00:15:41.969 "prchk_reftag": false, 00:15:41.969 "prchk_guard": false, 00:15:41.969 "hdgst": false, 00:15:41.969 "ddgst": false, 00:15:41.969 "dhchap_key": "key0", 00:15:41.969 "dhchap_ctrlr_key": "key1", 00:15:41.969 "method": "bdev_nvme_attach_controller", 00:15:41.969 "req_id": 1 00:15:41.969 } 00:15:41.969 Got JSON-RPC error response 00:15:41.969 response: 00:15:41.969 { 00:15:41.969 "code": -5, 00:15:41.969 "message": "Input/output error" 00:15:41.969 } 00:15:41.969 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:41.969 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.969 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.969 06:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:41.969 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:41.970 06:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:42.228 00:15:42.228 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:42.228 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:42.228 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.486 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.486 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.486 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 72119 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72119 ']' 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72119 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72119 00:15:43.053 killing process with pid 72119 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72119' 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72119 00:15:43.053 06:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72119 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.976 rmmod nvme_tcp 00:15:44.976 rmmod nvme_fabrics 00:15:44.976 rmmod nvme_keyring 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 75013 ']' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 75013 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 75013 ']' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 75013 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75013 00:15:44.976 killing process with pid 75013 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75013' 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 75013 00:15:44.976 06:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 75013 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:46.349 
06:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Cy5 /tmp/spdk.key-sha256.ilI /tmp/spdk.key-sha384.xUu /tmp/spdk.key-sha512.X7U /tmp/spdk.key-sha512.i1N /tmp/spdk.key-sha384.RWM /tmp/spdk.key-sha256.ssj '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:46.349 00:15:46.349 real 2m43.324s 00:15:46.349 user 6m29.481s 00:15:46.349 sys 0m22.905s 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.349 ************************************ 00:15:46.349 END TEST nvmf_auth_target 00:15:46.349 ************************************ 00:15:46.349 06:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.349 06:02:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.349 06:02:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:15:46.349 06:02:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:46.349 06:02:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:46.349 06:02:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.349 06:02:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.349 ************************************ 00:15:46.349 START TEST nvmf_bdevio_no_huge 00:15:46.349 ************************************ 00:15:46.349 06:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:46.349 * Looking for test storage... 00:15:46.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.349 
06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.349 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:46.350 Cannot find device "nvmf_tgt_br" 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.350 Cannot find device "nvmf_tgt_br2" 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:46.350 Cannot find device "nvmf_tgt_br" 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:46.350 Cannot find device "nvmf_tgt_br2" 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.350 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:46.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:46.608 00:15:46.608 --- 10.0.0.2 ping statistics --- 00:15:46.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.608 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:46.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:46.608 00:15:46.608 --- 10.0.0.3 ping statistics --- 00:15:46.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.608 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:46.608 00:15:46.608 --- 10.0.0.1 ping statistics --- 00:15:46.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.608 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=75363 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 75363 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 75363 ']' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.608 06:02:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.866 [2024-07-11 06:02:02.581631] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:15:46.866 [2024-07-11 06:02:02.581865] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:47.124 [2024-07-11 06:02:02.789240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.124 [2024-07-11 06:02:03.021567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
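The nvmfappstart step just above launches the target inside that namespace with --no-huge (1024 MB of ordinary memory instead of hugepages) on core mask 0x78 (cores 3-6, matching the reactor messages), then blocks in waitforlisten until the RPC socket answers. A rough standalone equivalent, with the polling loop as a simplified stand-in for the waitforlisten helper:

# Start the target in the namespace without hugepages.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll the RPC socket until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done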
00:15:47.124 [2024-07-11 06:02:03.021666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.124 [2024-07-11 06:02:03.021694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.124 [2024-07-11 06:02:03.021706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.124 [2024-07-11 06:02:03.021719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.124 [2024-07-11 06:02:03.021933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:47.124 [2024-07-11 06:02:03.022253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:47.124 [2024-07-11 06:02:03.022333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.124 [2024-07-11 06:02:03.022335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:47.382 [2024-07-11 06:02:03.158692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.640 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.640 [2024-07-11 06:02:03.555447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.898 Malloc0 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.898 [2024-07-11 06:02:03.644277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:47.898 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:47.898 { 00:15:47.898 "params": { 00:15:47.898 "name": "Nvme$subsystem", 00:15:47.898 "trtype": "$TEST_TRANSPORT", 00:15:47.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.898 "adrfam": "ipv4", 00:15:47.898 "trsvcid": "$NVMF_PORT", 00:15:47.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.898 "hdgst": ${hdgst:-false}, 00:15:47.898 "ddgst": ${ddgst:-false} 00:15:47.898 }, 00:15:47.899 "method": "bdev_nvme_attach_controller" 00:15:47.899 } 00:15:47.899 EOF 00:15:47.899 )") 00:15:47.899 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:47.899 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:15:47.899 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:47.899 06:02:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:47.899 "params": { 00:15:47.899 "name": "Nvme1", 00:15:47.899 "trtype": "tcp", 00:15:47.899 "traddr": "10.0.0.2", 00:15:47.899 "adrfam": "ipv4", 00:15:47.899 "trsvcid": "4420", 00:15:47.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:47.899 "hdgst": false, 00:15:47.899 "ddgst": false 00:15:47.899 }, 00:15:47.899 "method": "bdev_nvme_attach_controller" 00:15:47.899 }' 00:15:47.899 [2024-07-11 06:02:03.754036] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
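Taken together, the target-side rpc_cmd calls and the bdevio launch above reduce to roughly the sketch below. The JSON is the bdev_nvme_attach_controller entry that gen_nvmf_target_json prints in the log, wrapped here in SPDK's usual subsystem-config layout (the helper's exact wrapper may differ in detail); the test streams it over /dev/fd/62, while a temporary file is used here for readability:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Config telling bdevio to attach an NVMe-oF/TCP controller, exposed as bdev Nvme1n1.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1.json --no-huge -s 1024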
00:15:47.899 [2024-07-11 06:02:03.754293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75399 ] 00:15:48.157 [2024-07-11 06:02:03.956730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.414 [2024-07-11 06:02:04.244349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.414 [2024-07-11 06:02:04.244453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.414 [2024-07-11 06:02:04.244737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.672 [2024-07-11 06:02:04.404027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:48.930 I/O targets: 00:15:48.930 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:48.930 00:15:48.930 00:15:48.930 CUnit - A unit testing framework for C - Version 2.1-3 00:15:48.930 http://cunit.sourceforge.net/ 00:15:48.930 00:15:48.930 00:15:48.930 Suite: bdevio tests on: Nvme1n1 00:15:48.930 Test: blockdev write read block ...passed 00:15:48.930 Test: blockdev write zeroes read block ...passed 00:15:48.930 Test: blockdev write zeroes read no split ...passed 00:15:48.930 Test: blockdev write zeroes read split ...passed 00:15:48.930 Test: blockdev write zeroes read split partial ...passed 00:15:48.930 Test: blockdev reset ...[2024-07-11 06:02:04.674597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:48.930 [2024-07-11 06:02:04.674948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:15:48.930 [2024-07-11 06:02:04.691916] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:48.930 passed 00:15:48.930 Test: blockdev write read 8 blocks ...passed 00:15:48.930 Test: blockdev write read size > 128k ...passed 00:15:48.930 Test: blockdev write read invalid size ...passed 00:15:48.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.930 Test: blockdev write read max offset ...passed 00:15:48.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.930 Test: blockdev writev readv 8 blocks ...passed 00:15:48.930 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.930 Test: blockdev writev readv block ...passed 00:15:48.930 Test: blockdev writev readv size > 128k ...passed 00:15:48.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.930 Test: blockdev comparev and writev ...[2024-07-11 06:02:04.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.706284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.706317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.706338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.706802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.706852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.706883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.706904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.707374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.707437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.707466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.707488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.708019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.708090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.708121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:48.931 [2024-07-11 06:02:04.708141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:48.931 passed 00:15:48.931 Test: blockdev nvme passthru rw ...passed 00:15:48.931 Test: blockdev nvme passthru vendor specific ...[2024-07-11 06:02:04.709830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.931 [2024-07-11 06:02:04.709973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.710375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.931 [2024-07-11 06:02:04.710438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.710803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.931 [2024-07-11 06:02:04.710851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:48.931 [2024-07-11 06:02:04.711245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.931 [2024-07-11 06:02:04.711291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:48.931 passed 00:15:48.931 Test: blockdev nvme admin passthru ...passed 00:15:48.931 Test: blockdev copy ...passed 00:15:48.931 00:15:48.931 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.931 suites 1 1 n/a 0 0 00:15:48.931 tests 23 23 23 0 0 00:15:48.931 asserts 152 152 152 0 n/a 00:15:48.931 00:15:48.931 Elapsed time = 0.234 seconds 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.866 rmmod nvme_tcp 00:15:49.866 rmmod nvme_fabrics 00:15:49.866 rmmod nvme_keyring 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 75363 ']' 00:15:49.866 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 75363 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 75363 ']' 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 75363 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75363 00:15:49.867 killing process with pid 75363 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75363' 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 75363 00:15:49.867 06:02:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 75363 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:50.802 ************************************ 00:15:50.802 END TEST nvmf_bdevio_no_huge 00:15:50.802 ************************************ 00:15:50.802 00:15:50.802 real 0m4.417s 00:15:50.802 user 0m15.726s 00:15:50.802 sys 0m1.296s 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.802 06:02:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.802 06:02:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:50.802 06:02:06 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.802 06:02:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.802 06:02:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.802 06:02:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.802 ************************************ 00:15:50.802 START TEST nvmf_tls 00:15:50.802 ************************************ 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.802 * Looking for test storage... 
00:15:50.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.802 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:50.803 Cannot find device "nvmf_tgt_br" 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.803 Cannot find device "nvmf_tgt_br2" 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:50.803 Cannot find device "nvmf_tgt_br" 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:50.803 Cannot find device "nvmf_tgt_br2" 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.803 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:51.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:51.061 00:15:51.061 --- 10.0.0.2 ping statistics --- 00:15:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.061 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:51.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:51.061 00:15:51.061 --- 10.0.0.3 ping statistics --- 00:15:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.061 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:51.061 00:15:51.061 --- 10.0.0.1 ping statistics --- 00:15:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.061 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=75615 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 75615 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75615 ']' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.061 06:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.319 [2024-07-11 06:02:07.063290] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:15:51.319 [2024-07-11 06:02:07.063464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.624 [2024-07-11 06:02:07.240902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.624 [2024-07-11 06:02:07.482129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.624 [2024-07-11 06:02:07.482204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.624 [2024-07-11 06:02:07.482224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.624 [2024-07-11 06:02:07.482241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.624 [2024-07-11 06:02:07.482265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.624 [2024-07-11 06:02:07.482315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:52.189 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:52.447 true 00:15:52.447 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:52.447 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:52.705 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:52.705 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:52.705 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:52.963 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:52.963 06:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.221 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:53.221 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:53.221 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:53.479 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.479 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:53.736 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:53.736 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:53.736 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:53.736 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.993 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:53.993 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:53.993 06:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:54.251 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:15:54.251 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
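Because this target was started with --wait-for-rpc, the test can point the socket layer at the ssl implementation and tune it before the framework initializes; each change above is read back with sock_impl_get_options piped through jq. Condensed into a standalone sketch (same RPCs as in the log; framework_start_init, which appears a little further down, then lets initialization proceed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# While the target is still in its --wait-for-rpc pre-init state:
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13

# Read the option back and verify it, mirroring the test's jq check.
version=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
[[ $version == 13 ]] || { echo "unexpected tls_version: $version" >&2; exit 1; }

# kTLS can be toggled the same way.
$rpc sock_impl_set_options -i ssl --enable-ktls
$rpc sock_impl_set_options -i ssl --disable-ktls

# Finish target initialization once the socket options are in place.
$rpc framework_start_init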
00:15:54.509 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:54.509 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:54.509 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:54.767 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.767 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:55.025 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:55.283 06:02:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.M5vEkY7P4o 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.w223dVCb5h 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.M5vEkY7P4o 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.w223dVCb5h 00:15:55.283 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:55.540 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:56.107 [2024-07-11 06:02:11.741408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:15:56.107 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.M5vEkY7P4o 00:15:56.107 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.M5vEkY7P4o 00:15:56.107 06:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:56.364 [2024-07-11 06:02:12.119412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.365 06:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:56.623 06:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:56.881 [2024-07-11 06:02:12.599477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:56.881 [2024-07-11 06:02:12.599763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.881 06:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:57.139 malloc0 00:15:57.139 06:02:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:57.397 06:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M5vEkY7P4o 00:15:57.655 [2024-07-11 06:02:13.456724] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:57.655 06:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.M5vEkY7P4o 00:16:09.854 Initializing NVMe Controllers 00:16:09.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:09.854 Initialization complete. Launching workers. 
00:16:09.854 ======================================================== 00:16:09.854 Latency(us) 00:16:09.854 Device Information : IOPS MiB/s Average min max 00:16:09.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6609.40 25.82 9686.10 2472.30 22582.20 00:16:09.854 ======================================================== 00:16:09.854 Total : 6609.40 25.82 9686.10 2472.30 22582.20 00:16:09.854 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M5vEkY7P4o 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M5vEkY7P4o' 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75857 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75857 /var/tmp/bdevperf.sock 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75857 ']' 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.854 06:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.854 [2024-07-11 06:02:23.905065] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
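The TLS setup above has three pieces: a listener created with -k so it accepts TLS connections, a host entry binding an interchange-format PSK file to hostnqn nqn.2016-06.io.spdk:host1, and an initiator (spdk_nvme_perf -S ssl) pointed at the same PSK file. A condensed sketch of that wiring with the key file name from the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/tmp.M5vEkY7P4o   # NVMeTLSkey-1:01:... interchange key, chmod 0600 as above

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"

# Initiator side: perf connects over TLS using the same PSK file.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$psk"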
00:16:09.854 [2024-07-11 06:02:23.905257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75857 ] 00:16:09.854 [2024-07-11 06:02:24.079502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.854 [2024-07-11 06:02:24.275430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.854 [2024-07-11 06:02:24.438953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:09.854 06:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.854 06:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:09.854 06:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M5vEkY7P4o 00:16:09.854 [2024-07-11 06:02:24.969421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.854 [2024-07-11 06:02:24.969615] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:09.854 TLSTESTn1 00:16:09.854 06:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:09.854 Running I/O for 10 seconds... 00:16:19.823 00:16:19.823 Latency(us) 00:16:19.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.823 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:19.823 Verification LBA range: start 0x0 length 0x2000 00:16:19.823 TLSTESTn1 : 10.02 2926.15 11.43 0.00 0.00 43649.52 10426.18 89605.59 00:16:19.823 =================================================================================================================== 00:16:19.823 Total : 2926.15 11.43 0.00 0.00 43649.52 10426.18 89605.59 00:16:19.823 0 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 75857 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75857 ']' 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75857 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.823 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75857 00:16:19.823 killing process with pid 75857 00:16:19.823 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.824 00:16:19.824 Latency(us) 00:16:19.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.824 =================================================================================================================== 00:16:19.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.824 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:19.824 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:16:19.824 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75857' 00:16:19.824 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75857 00:16:19.824 [2024-07-11 06:02:35.259144] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:19.824 06:02:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75857 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w223dVCb5h 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w223dVCb5h 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w223dVCb5h 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w223dVCb5h' 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75992 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75992 /var/tmp/bdevperf.sock 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 75992 ']' 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.759 06:02:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.759 [2024-07-11 06:02:36.435832] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
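The target/tls.sh@146 step above wraps run_bdevperf in NOT, the autotest_common.sh helper that inverts the exit status (the es=0/es=1 bookkeeping visible in the trace), so the attach failures that follow are the expected outcome of this test case. A rough Python equivalent of that pattern, for orientation only (the real helper is a shell function):

import subprocess

def expect_failure(cmd) -> bool:
    # Mirrors the NOT/es=1 pattern in the trace: the step passes only
    # when the wrapped command exits non-zero.
    return subprocess.run(cmd).returncode != 0

print(expect_failure(["false"]))  # True: a failing command is what this test wants
print(expect_failure(["true"]))   # False: an unexpected success would fail the test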
00:16:20.759 [2024-07-11 06:02:36.436024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75992 ] 00:16:20.759 [2024-07-11 06:02:36.609679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.018 [2024-07-11 06:02:36.796688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.276 [2024-07-11 06:02:36.984264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:21.534 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.534 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:21.534 06:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w223dVCb5h 00:16:21.793 [2024-07-11 06:02:37.575014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.793 [2024-07-11 06:02:37.575179] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:21.793 [2024-07-11 06:02:37.589250] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:21.793 [2024-07-11 06:02:37.589440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:21.793 [2024-07-11 06:02:37.590384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:21.793 [2024-07-11 06:02:37.591384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:21.793 [2024-07-11 06:02:37.591431] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:21.793 [2024-07-11 06:02:37.591453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
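The failed attach above is driven by scripts/rpc.py against the bdevperf RPC socket; the JSON-RPC request it generates, and the -5 Input/output error it gets back, are dumped just below. A minimal illustrative client (not SPDK's rpc.py) that sends the same method over the UNIX socket; parameter values are copied from the logged request, and the single recv() is a simplification that is enough for a response this small:

import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.w223dVCb5h",  # the key file this negative test passes in
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())  # expect the Input/output error seen in the log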
00:16:21.793 request: 00:16:21.793 { 00:16:21.793 "name": "TLSTEST", 00:16:21.793 "trtype": "tcp", 00:16:21.793 "traddr": "10.0.0.2", 00:16:21.793 "adrfam": "ipv4", 00:16:21.793 "trsvcid": "4420", 00:16:21.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.793 "prchk_reftag": false, 00:16:21.793 "prchk_guard": false, 00:16:21.793 "hdgst": false, 00:16:21.793 "ddgst": false, 00:16:21.793 "psk": "/tmp/tmp.w223dVCb5h", 00:16:21.793 "method": "bdev_nvme_attach_controller", 00:16:21.793 "req_id": 1 00:16:21.793 } 00:16:21.793 Got JSON-RPC error response 00:16:21.793 response: 00:16:21.793 { 00:16:21.793 "code": -5, 00:16:21.793 "message": "Input/output error" 00:16:21.793 } 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 75992 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75992 ']' 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75992 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75992 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75992' 00:16:21.793 killing process with pid 75992 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75992 00:16:21.793 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.793 00:16:21.793 Latency(us) 00:16:21.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.793 =================================================================================================================== 00:16:21.793 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:21.793 [2024-07-11 06:02:37.643305] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:21.793 06:02:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75992 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M5vEkY7P4o 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M5vEkY7P4o 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:23.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M5vEkY7P4o 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M5vEkY7P4o' 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76026 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76026 /var/tmp/bdevperf.sock 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76026 ']' 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.212 06:02:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.212 [2024-07-11 06:02:38.825829] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:23.212 [2024-07-11 06:02:38.826006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76026 ] 00:16:23.212 [2024-07-11 06:02:38.998527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.470 [2024-07-11 06:02:39.199588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.470 [2024-07-11 06:02:39.370151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:24.036 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.036 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:24.036 06:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.M5vEkY7P4o 00:16:24.036 [2024-07-11 06:02:39.941518] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:24.036 [2024-07-11 06:02:39.941750] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:24.036 [2024-07-11 06:02:39.951458] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:24.036 [2024-07-11 06:02:39.951544] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:24.036 [2024-07-11 06:02:39.951615] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:24.036 [2024-07-11 06:02:39.951727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:24.036 [2024-07-11 06:02:39.952682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:24.036 [2024-07-11 06:02:39.953680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:24.036 [2024-07-11 06:02:39.953730] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:24.036 [2024-07-11 06:02:39.953759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
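The target-side errors above spell out why this case fails: the PSK lookup key is an identity string built from the host and subsystem NQNs, and nqn.2016-06.io.spdk:host2 was never registered against cnode1, so "Could not find PSK for identity" aborts the handshake before the controller can initialize. The composition of that identity, read off the error text itself rather than the SPDK sources:

def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # "NVMe0R01 <hostnqn> <subnqn>", exactly as printed by tcp_sock_get_key above
    return f"NVMe0R01 {hostnqn} {subnqn}"

print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
# NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1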
00:16:24.295 request: 00:16:24.295 { 00:16:24.295 "name": "TLSTEST", 00:16:24.295 "trtype": "tcp", 00:16:24.295 "traddr": "10.0.0.2", 00:16:24.295 "adrfam": "ipv4", 00:16:24.295 "trsvcid": "4420", 00:16:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:24.295 "prchk_reftag": false, 00:16:24.295 "prchk_guard": false, 00:16:24.295 "hdgst": false, 00:16:24.295 "ddgst": false, 00:16:24.295 "psk": "/tmp/tmp.M5vEkY7P4o", 00:16:24.295 "method": "bdev_nvme_attach_controller", 00:16:24.295 "req_id": 1 00:16:24.295 } 00:16:24.295 Got JSON-RPC error response 00:16:24.295 response: 00:16:24.295 { 00:16:24.295 "code": -5, 00:16:24.295 "message": "Input/output error" 00:16:24.295 } 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76026 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76026 ']' 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76026 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76026 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:24.295 killing process with pid 76026 00:16:24.295 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.295 00:16:24.295 Latency(us) 00:16:24.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.295 =================================================================================================================== 00:16:24.295 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76026' 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76026 00:16:24.295 06:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76026 00:16:24.295 [2024-07-11 06:02:39.999504] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M5vEkY7P4o 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M5vEkY7P4o 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M5vEkY7P4o 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M5vEkY7P4o' 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76066 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76066 /var/tmp/bdevperf.sock 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76066 ']' 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.256 06:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.256 [2024-07-11 06:02:41.118610] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:25.256 [2024-07-11 06:02:41.118813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76066 ] 00:16:25.518 [2024-07-11 06:02:41.291422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.777 [2024-07-11 06:02:41.458359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.777 [2024-07-11 06:02:41.613057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.344 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.344 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:26.344 06:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M5vEkY7P4o 00:16:26.344 [2024-07-11 06:02:42.241471] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.344 [2024-07-11 06:02:42.241644] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:26.344 [2024-07-11 06:02:42.250451] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:26.344 [2024-07-11 06:02:42.250510] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:26.344 [2024-07-11 06:02:42.250574] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:26.344 [2024-07-11 06:02:42.251133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:26.345 [2024-07-11 06:02:42.251793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:26.345 [2024-07-11 06:02:42.252796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:26.345 [2024-07-11 06:02:42.252835] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:26.345 [2024-07-11 06:02:42.252873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
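This is the mirror image of the previous case: the PSK file and hostnqn are the ones that worked earlier, but nqn.2016-06.io.spdk:cnode2 has no PSK registered for host1, so the same identity lookup misses. A purely conceptual model of that association (a hypothetical data structure, not SPDK's internals), keyed the way nvmf_subsystem_add_host --psk ties one key to one subsystem/host pair in this test:

psk_registry = {
    ("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1"): "/tmp/tmp.M5vEkY7P4o",
}

def find_psk(subnqn: str, hostnqn: str):
    return psk_registry.get((subnqn, hostnqn))

print(find_psk("nqn.2016-06.io.spdk:cnode2", "nqn.2016-06.io.spdk:host1"))  # None, so the TLS session is refused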
00:16:26.345 request: 00:16:26.345 { 00:16:26.345 "name": "TLSTEST", 00:16:26.345 "trtype": "tcp", 00:16:26.345 "traddr": "10.0.0.2", 00:16:26.345 "adrfam": "ipv4", 00:16:26.345 "trsvcid": "4420", 00:16:26.345 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:26.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.345 "prchk_reftag": false, 00:16:26.345 "prchk_guard": false, 00:16:26.345 "hdgst": false, 00:16:26.345 "ddgst": false, 00:16:26.345 "psk": "/tmp/tmp.M5vEkY7P4o", 00:16:26.345 "method": "bdev_nvme_attach_controller", 00:16:26.345 "req_id": 1 00:16:26.345 } 00:16:26.345 Got JSON-RPC error response 00:16:26.345 response: 00:16:26.345 { 00:16:26.345 "code": -5, 00:16:26.345 "message": "Input/output error" 00:16:26.345 } 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76066 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76066 ']' 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76066 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76066 00:16:26.604 killing process with pid 76066 00:16:26.604 Received shutdown signal, test time was about 10.000000 seconds 00:16:26.604 00:16:26.604 Latency(us) 00:16:26.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.604 =================================================================================================================== 00:16:26.604 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76066' 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76066 00:16:26.604 [2024-07-11 06:02:42.301628] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:26.604 06:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76066 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:27.540 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76100 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76100 /var/tmp/bdevperf.sock 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76100 ']' 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.541 06:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.541 [2024-07-11 06:02:43.408953] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:27.541 [2024-07-11 06:02:43.409159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76100 ] 00:16:27.812 [2024-07-11 06:02:43.581139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.074 [2024-07-11 06:02:43.757501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.074 [2024-07-11 06:02:43.914487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:28.641 [2024-07-11 06:02:44.519818] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.641 [2024-07-11 06:02:44.521408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:16:28.641 [2024-07-11 06:02:44.522404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:28.641 [2024-07-11 06:02:44.522462] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:28.641 [2024-07-11 06:02:44.522517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
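Every case in this block, failing or not, drives the same three-step bdevperf sequence that keeps repeating in the trace: start bdevperf idle with -z on /var/tmp/bdevperf.sock, attach a controller with bdev_nvme_attach_controller (with or without --psk), then trigger the workload with bdevperf.py perform_tests. A condensed sketch of that sequence using the binaries and paths of this particular environment, so it is only meaningful inside the same VM image; the sleep stands in for the waitforlisten polling the real test does:

import subprocess
import time

SPDK = "/home/vagrant/spdk_repo/spdk"
SOCK = "/var/tmp/bdevperf.sock"

# 1. Start bdevperf in wait-for-RPC mode so bdevs can be attached first.
bdevperf = subprocess.Popen([
    f"{SPDK}/build/examples/bdevperf", "-m", "0x4", "-z", "-r", SOCK,
    "-q", "128", "-o", "4096", "-w", "verify", "-t", "10",
])
time.sleep(2)

# 2. Attach the NVMe-oF TCP controller, optionally pointing at a TLS PSK file.
subprocess.run([
    f"{SPDK}/scripts/rpc.py", "-s", SOCK, "bdev_nvme_attach_controller",
    "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
    "--psk", "/tmp/tmp.M5vEkY7P4o",
], check=True)

# 3. Run the timed verify workload ("Running I/O for 10 seconds...").
subprocess.run([
    f"{SPDK}/examples/bdev/bdevperf/bdevperf.py", "-t", "20", "-s", SOCK,
    "perform_tests",
], check=True)

bdevperf.terminate()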
00:16:28.641 request: 00:16:28.641 { 00:16:28.641 "name": "TLSTEST", 00:16:28.641 "trtype": "tcp", 00:16:28.641 "traddr": "10.0.0.2", 00:16:28.641 "adrfam": "ipv4", 00:16:28.641 "trsvcid": "4420", 00:16:28.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.641 "prchk_reftag": false, 00:16:28.641 "prchk_guard": false, 00:16:28.641 "hdgst": false, 00:16:28.641 "ddgst": false, 00:16:28.641 "method": "bdev_nvme_attach_controller", 00:16:28.641 "req_id": 1 00:16:28.641 } 00:16:28.641 Got JSON-RPC error response 00:16:28.641 response: 00:16:28.641 { 00:16:28.641 "code": -5, 00:16:28.641 "message": "Input/output error" 00:16:28.641 } 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76100 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76100 ']' 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76100 00:16:28.641 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76100 00:16:28.900 killing process with pid 76100 00:16:28.900 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.900 00:16:28.900 Latency(us) 00:16:28.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.900 =================================================================================================================== 00:16:28.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76100' 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76100 00:16:28.900 06:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76100 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 75615 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 75615 ']' 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 75615 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75615 00:16:29.836 killing process with pid 75615 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
75615' 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 75615 00:16:29.836 [2024-07-11 06:02:45.593553] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:29.836 06:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 75615 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:30.771 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rZdvdJgnam 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rZdvdJgnam 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76158 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76158 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76158 ']' 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.030 06:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.030 [2024-07-11 06:02:46.852982] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
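The format_interchange_psk call above converts the raw 48-character key 00112233445566778899aabbccddeeff0011223344556677 into the interchange form stored in key_long and written to /tmp/tmp.rZdvdJgnam. A sketch of that conversion, assuming the helper follows the NVMe TLS PSK interchange layout of an NVMeTLSkey-1 prefix, a two-digit hash identifier, and base64 of the configured key with a little-endian CRC32 appended (the embedded "python -" step in the trace does the equivalent work):

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Assumed layout: NVMeTLSkey-1:<hh>:<base64(key bytes + CRC32(key), little-endian)>:
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(raw + crc).decode())

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
# Under the assumptions above, this reproduces the key_long value captured in the trace.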
00:16:31.030 [2024-07-11 06:02:46.853167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.289 [2024-07-11 06:02:47.030953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.548 [2024-07-11 06:02:47.251819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.548 [2024-07-11 06:02:47.251890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.548 [2024-07-11 06:02:47.251906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.548 [2024-07-11 06:02:47.251918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.548 [2024-07-11 06:02:47.251928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.548 [2024-07-11 06:02:47.251964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.548 [2024-07-11 06:02:47.407893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rZdvdJgnam 00:16:32.116 06:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:32.116 [2024-07-11 06:02:48.031935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.373 06:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:32.374 06:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:32.631 [2024-07-11 06:02:48.520063] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:32.631 [2024-07-11 06:02:48.520366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.631 06:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:32.890 malloc0 00:16:32.890 06:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:33.148 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:33.406 
[2024-07-11 06:02:49.243173] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rZdvdJgnam 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rZdvdJgnam' 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76213 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76213 /var/tmp/bdevperf.sock 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76213 ']' 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.406 06:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.665 [2024-07-11 06:02:49.365947] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:33.665 [2024-07-11 06:02:49.366403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76213 ] 00:16:33.665 [2024-07-11 06:02:49.538407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.923 [2024-07-11 06:02:49.753392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.182 [2024-07-11 06:02:49.922046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:34.440 06:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.440 06:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:34.440 06:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:34.699 [2024-07-11 06:02:50.468776] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:34.699 [2024-07-11 06:02:50.469507] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:34.699 TLSTESTn1 00:16:34.699 06:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:34.957 Running I/O for 10 seconds... 00:16:44.929 00:16:44.929 Latency(us) 00:16:44.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.929 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:44.929 Verification LBA range: start 0x0 length 0x2000 00:16:44.929 TLSTESTn1 : 10.04 2953.30 11.54 0.00 0.00 43226.58 8996.31 27405.96 00:16:44.929 =================================================================================================================== 00:16:44.929 Total : 2953.30 11.54 0.00 0.00 43226.58 8996.31 27405.96 00:16:44.929 0 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 76213 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76213 ']' 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76213 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76213 00:16:44.929 killing process with pid 76213 00:16:44.929 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.929 00:16:44.929 Latency(us) 00:16:44.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.929 =================================================================================================================== 00:16:44.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76213' 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76213 00:16:44.929 [2024-07-11 06:03:00.781940] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:44.929 06:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76213 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rZdvdJgnam 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rZdvdJgnam 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rZdvdJgnam 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rZdvdJgnam 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rZdvdJgnam' 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76357 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76357 /var/tmp/bdevperf.sock 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76357 ']' 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.306 06:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 [2024-07-11 06:03:02.000462] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:16:46.306 [2024-07-11 06:03:02.000971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76357 ] 00:16:46.306 [2024-07-11 06:03:02.168274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.564 [2024-07-11 06:03:02.387269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.822 [2024-07-11 06:03:02.566124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:47.080 06:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.080 06:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:47.080 06:03:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:47.338 [2024-07-11 06:03:03.179132] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.338 [2024-07-11 06:03:03.179876] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:47.338 [2024-07-11 06:03:03.179905] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rZdvdJgnam 00:16:47.338 request: 00:16:47.338 { 00:16:47.338 "name": "TLSTEST", 00:16:47.338 "trtype": "tcp", 00:16:47.338 "traddr": "10.0.0.2", 00:16:47.338 "adrfam": "ipv4", 00:16:47.338 "trsvcid": "4420", 00:16:47.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.338 "prchk_reftag": false, 00:16:47.338 "prchk_guard": false, 00:16:47.338 "hdgst": false, 00:16:47.338 "ddgst": false, 00:16:47.338 "psk": "/tmp/tmp.rZdvdJgnam", 00:16:47.338 "method": "bdev_nvme_attach_controller", 00:16:47.338 "req_id": 1 00:16:47.338 } 00:16:47.338 Got JSON-RPC error response 00:16:47.338 response: 00:16:47.338 { 00:16:47.338 "code": -1, 00:16:47.338 "message": "Operation not permitted" 00:16:47.338 } 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 76357 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76357 ']' 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76357 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.338 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76357 00:16:47.338 killing process with pid 76357 00:16:47.339 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.339 00:16:47.339 Latency(us) 00:16:47.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.339 =================================================================================================================== 00:16:47.339 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.339 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:47.339 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:47.339 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 76357' 00:16:47.339 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76357 00:16:47.339 06:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76357 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 76158 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76158 ']' 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76158 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76158 00:16:48.716 killing process with pid 76158 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76158' 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76158 00:16:48.716 [2024-07-11 06:03:04.279550] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:48.716 06:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76158 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76408 00:16:49.652 06:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76408 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76408 ']' 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.653 06:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.653 [2024-07-11 06:03:05.504488] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
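The failure just above is the point of target/tls.sh test 171: after chmod 0666 on /tmp/tmp.rZdvdJgnam, bdev_nvme rejects the key with "Incorrect permissions for PSK file" and the attach RPC returns -1 (Operation not permitted), even though the key contents are unchanged. A hypothetical pre-flight check in the same spirit; the real enforcement lives inside bdev_nvme, and this only illustrates refusing group/other access bits so that 0600 passes and 0666 does not:

import os
import stat

def psk_file_permissions_ok(path: str) -> bool:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    # Any group or other permission bit disqualifies the file.
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# e.g. psk_file_permissions_ok("/tmp/tmp.rZdvdJgnam") is True after chmod 0600
# and False after the chmod 0666 step traced above.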
00:16:49.653 [2024-07-11 06:03:05.504989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.911 [2024-07-11 06:03:05.679963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.911 [2024-07-11 06:03:05.831161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.911 [2024-07-11 06:03:05.831218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.911 [2024-07-11 06:03:05.831232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.911 [2024-07-11 06:03:05.831245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.911 [2024-07-11 06:03:05.831254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.911 [2024-07-11 06:03:05.831290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.170 [2024-07-11 06:03:05.995663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rZdvdJgnam 00:16:50.737 06:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:50.995 [2024-07-11 06:03:06.704162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.995 06:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:51.254 06:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:51.254 [2024-07-11 06:03:07.128272] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:16:51.254 [2024-07-11 06:03:07.128917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.254 06:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:51.512 malloc0 00:16:51.512 06:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:51.770 06:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:52.028 [2024-07-11 06:03:07.844850] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:52.028 [2024-07-11 06:03:07.844906] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:52.028 [2024-07-11 06:03:07.844935] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:52.028 request: 00:16:52.028 { 00:16:52.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.028 "host": "nqn.2016-06.io.spdk:host1", 00:16:52.028 "psk": "/tmp/tmp.rZdvdJgnam", 00:16:52.028 "method": "nvmf_subsystem_add_host", 00:16:52.028 "req_id": 1 00:16:52.028 } 00:16:52.028 Got JSON-RPC error response 00:16:52.028 response: 00:16:52.028 { 00:16:52.028 "code": -32603, 00:16:52.028 "message": "Internal error" 00:16:52.028 } 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 76408 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76408 ']' 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76408 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.028 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76408 00:16:52.028 killing process with pid 76408 00:16:52.029 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:52.029 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:52.029 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76408' 00:16:52.029 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76408 00:16:52.029 06:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76408 00:16:53.404 06:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rZdvdJgnam 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76483 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76483 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76483 ']' 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.404 06:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.404 [2024-07-11 06:03:09.130174] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:53.404 [2024-07-11 06:03:09.130359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.404 [2024-07-11 06:03:09.300375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.663 [2024-07-11 06:03:09.466310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.663 [2024-07-11 06:03:09.466428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.663 [2024-07-11 06:03:09.466446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.663 [2024-07-11 06:03:09.466461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.663 [2024-07-11 06:03:09.466472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
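The "Incorrect permissions for PSK file" errors above go away only after the chmod 0600 at target/tls.sh:181: both bdev_nvme_load_psk and tcp_load_psk refuse the key file until its mode is restricted (0600 here). A minimal sketch of that fix plus a sanity check, with the path as used in this run (the key material itself was generated earlier in the script):

    key=/tmp/tmp.rZdvdJgnam
    chmod 0600 "$key"
    [[ "$(stat -c '%a' "$key")" == "600" ]] || { echo "unexpected mode on $key" >&2; exit 1; }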
00:16:53.663 [2024-07-11 06:03:09.466513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.921 [2024-07-11 06:03:09.646679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rZdvdJgnam 00:16:54.180 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:54.439 [2024-07-11 06:03:10.294181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.439 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:54.697 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:54.956 [2024-07-11 06:03:10.782402] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:54.956 [2024-07-11 06:03:10.782705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.956 06:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:55.215 malloc0 00:16:55.215 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:55.473 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:55.732 [2024-07-11 06:03:11.523505] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:55.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
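With the key file's mode fixed, the same setup now succeeds and only the PSK-path deprecation warning is logged. Condensed from the trace above, the target-side sequence that setup_nvmf_tgt issues in this run is, as a sketch of the RPC calls rather than the function body itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.rZdvdJgnam
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"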
00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=76533 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 76533 /var/tmp/bdevperf.sock 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76533 ']' 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.732 06:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.732 [2024-07-11 06:03:11.630459] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:55.732 [2024-07-11 06:03:11.630898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76533 ] 00:16:55.990 [2024-07-11 06:03:11.798803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.248 [2024-07-11 06:03:12.031246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.507 [2024-07-11 06:03:12.209809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:56.765 06:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.765 06:03:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:56.766 06:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:16:57.024 [2024-07-11 06:03:12.713937] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.024 [2024-07-11 06:03:12.714118] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:57.024 TLSTESTn1 00:16:57.024 06:03:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:57.283 06:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:57.283 "subsystems": [ 00:16:57.283 { 00:16:57.283 "subsystem": "keyring", 00:16:57.283 "config": [] 00:16:57.283 }, 00:16:57.283 { 00:16:57.283 "subsystem": "iobuf", 00:16:57.283 "config": [ 00:16:57.283 { 00:16:57.283 "method": "iobuf_set_options", 00:16:57.283 "params": { 00:16:57.283 "small_pool_count": 8192, 00:16:57.283 "large_pool_count": 1024, 00:16:57.283 "small_bufsize": 8192, 00:16:57.283 "large_bufsize": 135168 00:16:57.283 } 00:16:57.283 } 00:16:57.283 ] 00:16:57.283 }, 00:16:57.283 { 00:16:57.283 "subsystem": "sock", 00:16:57.283 "config": [ 00:16:57.283 { 00:16:57.283 
"method": "sock_set_default_impl", 00:16:57.283 "params": { 00:16:57.283 "impl_name": "uring" 00:16:57.283 } 00:16:57.283 }, 00:16:57.283 { 00:16:57.283 "method": "sock_impl_set_options", 00:16:57.283 "params": { 00:16:57.283 "impl_name": "ssl", 00:16:57.283 "recv_buf_size": 4096, 00:16:57.283 "send_buf_size": 4096, 00:16:57.283 "enable_recv_pipe": true, 00:16:57.283 "enable_quickack": false, 00:16:57.283 "enable_placement_id": 0, 00:16:57.283 "enable_zerocopy_send_server": true, 00:16:57.283 "enable_zerocopy_send_client": false, 00:16:57.283 "zerocopy_threshold": 0, 00:16:57.283 "tls_version": 0, 00:16:57.284 "enable_ktls": false 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "sock_impl_set_options", 00:16:57.284 "params": { 00:16:57.284 "impl_name": "posix", 00:16:57.284 "recv_buf_size": 2097152, 00:16:57.284 "send_buf_size": 2097152, 00:16:57.284 "enable_recv_pipe": true, 00:16:57.284 "enable_quickack": false, 00:16:57.284 "enable_placement_id": 0, 00:16:57.284 "enable_zerocopy_send_server": true, 00:16:57.284 "enable_zerocopy_send_client": false, 00:16:57.284 "zerocopy_threshold": 0, 00:16:57.284 "tls_version": 0, 00:16:57.284 "enable_ktls": false 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "sock_impl_set_options", 00:16:57.284 "params": { 00:16:57.284 "impl_name": "uring", 00:16:57.284 "recv_buf_size": 2097152, 00:16:57.284 "send_buf_size": 2097152, 00:16:57.284 "enable_recv_pipe": true, 00:16:57.284 "enable_quickack": false, 00:16:57.284 "enable_placement_id": 0, 00:16:57.284 "enable_zerocopy_send_server": false, 00:16:57.284 "enable_zerocopy_send_client": false, 00:16:57.284 "zerocopy_threshold": 0, 00:16:57.284 "tls_version": 0, 00:16:57.284 "enable_ktls": false 00:16:57.284 } 00:16:57.284 } 00:16:57.284 ] 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "subsystem": "vmd", 00:16:57.284 "config": [] 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "subsystem": "accel", 00:16:57.284 "config": [ 00:16:57.284 { 00:16:57.284 "method": "accel_set_options", 00:16:57.284 "params": { 00:16:57.284 "small_cache_size": 128, 00:16:57.284 "large_cache_size": 16, 00:16:57.284 "task_count": 2048, 00:16:57.284 "sequence_count": 2048, 00:16:57.284 "buf_count": 2048 00:16:57.284 } 00:16:57.284 } 00:16:57.284 ] 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "subsystem": "bdev", 00:16:57.284 "config": [ 00:16:57.284 { 00:16:57.284 "method": "bdev_set_options", 00:16:57.284 "params": { 00:16:57.284 "bdev_io_pool_size": 65535, 00:16:57.284 "bdev_io_cache_size": 256, 00:16:57.284 "bdev_auto_examine": true, 00:16:57.284 "iobuf_small_cache_size": 128, 00:16:57.284 "iobuf_large_cache_size": 16 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "bdev_raid_set_options", 00:16:57.284 "params": { 00:16:57.284 "process_window_size_kb": 1024 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "bdev_iscsi_set_options", 00:16:57.284 "params": { 00:16:57.284 "timeout_sec": 30 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "bdev_nvme_set_options", 00:16:57.284 "params": { 00:16:57.284 "action_on_timeout": "none", 00:16:57.284 "timeout_us": 0, 00:16:57.284 "timeout_admin_us": 0, 00:16:57.284 "keep_alive_timeout_ms": 10000, 00:16:57.284 "arbitration_burst": 0, 00:16:57.284 "low_priority_weight": 0, 00:16:57.284 "medium_priority_weight": 0, 00:16:57.284 "high_priority_weight": 0, 00:16:57.284 "nvme_adminq_poll_period_us": 10000, 00:16:57.284 "nvme_ioq_poll_period_us": 0, 00:16:57.284 "io_queue_requests": 0, 00:16:57.284 
"delay_cmd_submit": true, 00:16:57.284 "transport_retry_count": 4, 00:16:57.284 "bdev_retry_count": 3, 00:16:57.284 "transport_ack_timeout": 0, 00:16:57.284 "ctrlr_loss_timeout_sec": 0, 00:16:57.284 "reconnect_delay_sec": 0, 00:16:57.284 "fast_io_fail_timeout_sec": 0, 00:16:57.284 "disable_auto_failback": false, 00:16:57.284 "generate_uuids": false, 00:16:57.284 "transport_tos": 0, 00:16:57.284 "nvme_error_stat": false, 00:16:57.284 "rdma_srq_size": 0, 00:16:57.284 "io_path_stat": false, 00:16:57.284 "allow_accel_sequence": false, 00:16:57.284 "rdma_max_cq_size": 0, 00:16:57.284 "rdma_cm_event_timeout_ms": 0, 00:16:57.284 "dhchap_digests": [ 00:16:57.284 "sha256", 00:16:57.284 "sha384", 00:16:57.284 "sha512" 00:16:57.284 ], 00:16:57.284 "dhchap_dhgroups": [ 00:16:57.284 "null", 00:16:57.284 "ffdhe2048", 00:16:57.284 "ffdhe3072", 00:16:57.284 "ffdhe4096", 00:16:57.284 "ffdhe6144", 00:16:57.284 "ffdhe8192" 00:16:57.284 ] 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "bdev_nvme_set_hotplug", 00:16:57.284 "params": { 00:16:57.284 "period_us": 100000, 00:16:57.284 "enable": false 00:16:57.284 } 00:16:57.284 }, 00:16:57.284 { 00:16:57.284 "method": "bdev_malloc_create", 00:16:57.284 "params": { 00:16:57.284 "name": "malloc0", 00:16:57.284 "num_blocks": 8192, 00:16:57.284 "block_size": 4096, 00:16:57.285 "physical_block_size": 4096, 00:16:57.285 "uuid": "445a55f1-5846-4e6b-822c-1b0ba95b9126", 00:16:57.285 "optimal_io_boundary": 0 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "bdev_wait_for_examine" 00:16:57.285 } 00:16:57.285 ] 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "subsystem": "nbd", 00:16:57.285 "config": [] 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "subsystem": "scheduler", 00:16:57.285 "config": [ 00:16:57.285 { 00:16:57.285 "method": "framework_set_scheduler", 00:16:57.285 "params": { 00:16:57.285 "name": "static" 00:16:57.285 } 00:16:57.285 } 00:16:57.285 ] 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "subsystem": "nvmf", 00:16:57.285 "config": [ 00:16:57.285 { 00:16:57.285 "method": "nvmf_set_config", 00:16:57.285 "params": { 00:16:57.285 "discovery_filter": "match_any", 00:16:57.285 "admin_cmd_passthru": { 00:16:57.285 "identify_ctrlr": false 00:16:57.285 } 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_set_max_subsystems", 00:16:57.285 "params": { 00:16:57.285 "max_subsystems": 1024 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_set_crdt", 00:16:57.285 "params": { 00:16:57.285 "crdt1": 0, 00:16:57.285 "crdt2": 0, 00:16:57.285 "crdt3": 0 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_create_transport", 00:16:57.285 "params": { 00:16:57.285 "trtype": "TCP", 00:16:57.285 "max_queue_depth": 128, 00:16:57.285 "max_io_qpairs_per_ctrlr": 127, 00:16:57.285 "in_capsule_data_size": 4096, 00:16:57.285 "max_io_size": 131072, 00:16:57.285 "io_unit_size": 131072, 00:16:57.285 "max_aq_depth": 128, 00:16:57.285 "num_shared_buffers": 511, 00:16:57.285 "buf_cache_size": 4294967295, 00:16:57.285 "dif_insert_or_strip": false, 00:16:57.285 "zcopy": false, 00:16:57.285 "c2h_success": false, 00:16:57.285 "sock_priority": 0, 00:16:57.285 "abort_timeout_sec": 1, 00:16:57.285 "ack_timeout": 0, 00:16:57.285 "data_wr_pool_size": 0 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_create_subsystem", 00:16:57.285 "params": { 00:16:57.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.285 "allow_any_host": false, 00:16:57.285 "serial_number": 
"SPDK00000000000001", 00:16:57.285 "model_number": "SPDK bdev Controller", 00:16:57.285 "max_namespaces": 10, 00:16:57.285 "min_cntlid": 1, 00:16:57.285 "max_cntlid": 65519, 00:16:57.285 "ana_reporting": false 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_subsystem_add_host", 00:16:57.285 "params": { 00:16:57.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.285 "host": "nqn.2016-06.io.spdk:host1", 00:16:57.285 "psk": "/tmp/tmp.rZdvdJgnam" 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_subsystem_add_ns", 00:16:57.285 "params": { 00:16:57.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.285 "namespace": { 00:16:57.285 "nsid": 1, 00:16:57.285 "bdev_name": "malloc0", 00:16:57.285 "nguid": "445A55F158464E6B822C1B0BA95B9126", 00:16:57.285 "uuid": "445a55f1-5846-4e6b-822c-1b0ba95b9126", 00:16:57.285 "no_auto_visible": false 00:16:57.285 } 00:16:57.285 } 00:16:57.285 }, 00:16:57.285 { 00:16:57.285 "method": "nvmf_subsystem_add_listener", 00:16:57.285 "params": { 00:16:57.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.285 "listen_address": { 00:16:57.285 "trtype": "TCP", 00:16:57.285 "adrfam": "IPv4", 00:16:57.285 "traddr": "10.0.0.2", 00:16:57.285 "trsvcid": "4420" 00:16:57.285 }, 00:16:57.285 "secure_channel": true 00:16:57.285 } 00:16:57.285 } 00:16:57.285 ] 00:16:57.285 } 00:16:57.285 ] 00:16:57.285 }' 00:16:57.285 06:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:57.863 06:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:57.863 "subsystems": [ 00:16:57.863 { 00:16:57.863 "subsystem": "keyring", 00:16:57.863 "config": [] 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "subsystem": "iobuf", 00:16:57.863 "config": [ 00:16:57.863 { 00:16:57.863 "method": "iobuf_set_options", 00:16:57.863 "params": { 00:16:57.863 "small_pool_count": 8192, 00:16:57.863 "large_pool_count": 1024, 00:16:57.863 "small_bufsize": 8192, 00:16:57.863 "large_bufsize": 135168 00:16:57.863 } 00:16:57.863 } 00:16:57.863 ] 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "subsystem": "sock", 00:16:57.863 "config": [ 00:16:57.863 { 00:16:57.863 "method": "sock_set_default_impl", 00:16:57.863 "params": { 00:16:57.863 "impl_name": "uring" 00:16:57.863 } 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "method": "sock_impl_set_options", 00:16:57.863 "params": { 00:16:57.863 "impl_name": "ssl", 00:16:57.863 "recv_buf_size": 4096, 00:16:57.863 "send_buf_size": 4096, 00:16:57.863 "enable_recv_pipe": true, 00:16:57.863 "enable_quickack": false, 00:16:57.863 "enable_placement_id": 0, 00:16:57.863 "enable_zerocopy_send_server": true, 00:16:57.863 "enable_zerocopy_send_client": false, 00:16:57.863 "zerocopy_threshold": 0, 00:16:57.863 "tls_version": 0, 00:16:57.863 "enable_ktls": false 00:16:57.863 } 00:16:57.863 }, 00:16:57.863 { 00:16:57.863 "method": "sock_impl_set_options", 00:16:57.863 "params": { 00:16:57.863 "impl_name": "posix", 00:16:57.863 "recv_buf_size": 2097152, 00:16:57.863 "send_buf_size": 2097152, 00:16:57.864 "enable_recv_pipe": true, 00:16:57.864 "enable_quickack": false, 00:16:57.864 "enable_placement_id": 0, 00:16:57.864 "enable_zerocopy_send_server": true, 00:16:57.864 "enable_zerocopy_send_client": false, 00:16:57.864 "zerocopy_threshold": 0, 00:16:57.864 "tls_version": 0, 00:16:57.864 "enable_ktls": false 00:16:57.864 } 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "method": "sock_impl_set_options", 00:16:57.864 "params": { 00:16:57.864 "impl_name": "uring", 
00:16:57.864 "recv_buf_size": 2097152, 00:16:57.864 "send_buf_size": 2097152, 00:16:57.864 "enable_recv_pipe": true, 00:16:57.864 "enable_quickack": false, 00:16:57.864 "enable_placement_id": 0, 00:16:57.864 "enable_zerocopy_send_server": false, 00:16:57.864 "enable_zerocopy_send_client": false, 00:16:57.864 "zerocopy_threshold": 0, 00:16:57.864 "tls_version": 0, 00:16:57.864 "enable_ktls": false 00:16:57.864 } 00:16:57.864 } 00:16:57.864 ] 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "subsystem": "vmd", 00:16:57.864 "config": [] 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "subsystem": "accel", 00:16:57.864 "config": [ 00:16:57.864 { 00:16:57.864 "method": "accel_set_options", 00:16:57.864 "params": { 00:16:57.864 "small_cache_size": 128, 00:16:57.864 "large_cache_size": 16, 00:16:57.864 "task_count": 2048, 00:16:57.864 "sequence_count": 2048, 00:16:57.864 "buf_count": 2048 00:16:57.864 } 00:16:57.864 } 00:16:57.864 ] 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "subsystem": "bdev", 00:16:57.864 "config": [ 00:16:57.864 { 00:16:57.864 "method": "bdev_set_options", 00:16:57.864 "params": { 00:16:57.864 "bdev_io_pool_size": 65535, 00:16:57.864 "bdev_io_cache_size": 256, 00:16:57.864 "bdev_auto_examine": true, 00:16:57.864 "iobuf_small_cache_size": 128, 00:16:57.864 "iobuf_large_cache_size": 16 00:16:57.864 } 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "method": "bdev_raid_set_options", 00:16:57.864 "params": { 00:16:57.864 "process_window_size_kb": 1024 00:16:57.864 } 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "method": "bdev_iscsi_set_options", 00:16:57.864 "params": { 00:16:57.864 "timeout_sec": 30 00:16:57.864 } 00:16:57.864 }, 00:16:57.864 { 00:16:57.864 "method": "bdev_nvme_set_options", 00:16:57.864 "params": { 00:16:57.864 "action_on_timeout": "none", 00:16:57.864 "timeout_us": 0, 00:16:57.864 "timeout_admin_us": 0, 00:16:57.864 "keep_alive_timeout_ms": 10000, 00:16:57.864 "arbitration_burst": 0, 00:16:57.864 "low_priority_weight": 0, 00:16:57.864 "medium_priority_weight": 0, 00:16:57.864 "high_priority_weight": 0, 00:16:57.864 "nvme_adminq_poll_period_us": 10000, 00:16:57.864 "nvme_ioq_poll_period_us": 0, 00:16:57.864 "io_queue_requests": 512, 00:16:57.864 "delay_cmd_submit": true, 00:16:57.864 "transport_retry_count": 4, 00:16:57.864 "bdev_retry_count": 3, 00:16:57.864 "transport_ack_timeout": 0, 00:16:57.864 "ctrlr_loss_timeout_sec": 0, 00:16:57.864 "reconnect_delay_sec": 0, 00:16:57.865 "fast_io_fail_timeout_sec": 0, 00:16:57.865 "disable_auto_failback": false, 00:16:57.865 "generate_uuids": false, 00:16:57.865 "transport_tos": 0, 00:16:57.865 "nvme_error_stat": false, 00:16:57.865 "rdma_srq_size": 0, 00:16:57.865 "io_path_stat": false, 00:16:57.865 "allow_accel_sequence": false, 00:16:57.865 "rdma_max_cq_size": 0, 00:16:57.865 "rdma_cm_event_timeout_ms": 0, 00:16:57.865 "dhchap_digests": [ 00:16:57.865 "sha256", 00:16:57.865 "sha384", 00:16:57.865 "sha512" 00:16:57.865 ], 00:16:57.865 "dhchap_dhgroups": [ 00:16:57.865 "null", 00:16:57.865 "ffdhe2048", 00:16:57.865 "ffdhe3072", 00:16:57.865 "ffdhe4096", 00:16:57.865 "ffdhe6144", 00:16:57.865 "ffdhe8192" 00:16:57.865 ] 00:16:57.865 } 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "method": "bdev_nvme_attach_controller", 00:16:57.865 "params": { 00:16:57.865 "name": "TLSTEST", 00:16:57.865 "trtype": "TCP", 00:16:57.865 "adrfam": "IPv4", 00:16:57.865 "traddr": "10.0.0.2", 00:16:57.865 "trsvcid": "4420", 00:16:57.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.865 "prchk_reftag": false, 00:16:57.865 "prchk_guard": false, 00:16:57.865 
"ctrlr_loss_timeout_sec": 0, 00:16:57.865 "reconnect_delay_sec": 0, 00:16:57.865 "fast_io_fail_timeout_sec": 0, 00:16:57.865 "psk": "/tmp/tmp.rZdvdJgnam", 00:16:57.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.865 "hdgst": false, 00:16:57.865 "ddgst": false 00:16:57.865 } 00:16:57.865 }, 00:16:57.865 { 00:16:57.865 "method": "bdev_nvme_set_hotplug", 00:16:57.865 "params": { 00:16:57.865 "period_us": 100000, 00:16:57.865 "enable": false 00:16:57.865 } 00:16:57.865 }, 00:16:57.865 { 00:16:57.866 "method": "bdev_wait_for_examine" 00:16:57.866 } 00:16:57.866 ] 00:16:57.866 }, 00:16:57.866 { 00:16:57.866 "subsystem": "nbd", 00:16:57.866 "config": [] 00:16:57.866 } 00:16:57.866 ] 00:16:57.866 }' 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 76533 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76533 ']' 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76533 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76533 00:16:57.866 killing process with pid 76533 00:16:57.866 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.866 00:16:57.866 Latency(us) 00:16:57.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.866 =================================================================================================================== 00:16:57.866 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76533' 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76533 00:16:57.866 [2024-07-11 06:03:13.487111] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:57.866 06:03:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76533 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 76483 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76483 ']' 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76483 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76483 00:16:58.834 killing process with pid 76483 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76483' 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76483 00:16:58.834 [2024-07-11 06:03:14.503400] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:16:58.834 06:03:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76483 00:16:59.770 06:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:59.770 06:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.770 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.770 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.770 06:03:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:59.770 "subsystems": [ 00:16:59.770 { 00:16:59.770 "subsystem": "keyring", 00:16:59.770 "config": [] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "iobuf", 00:16:59.770 "config": [ 00:16:59.770 { 00:16:59.770 "method": "iobuf_set_options", 00:16:59.770 "params": { 00:16:59.770 "small_pool_count": 8192, 00:16:59.770 "large_pool_count": 1024, 00:16:59.770 "small_bufsize": 8192, 00:16:59.770 "large_bufsize": 135168 00:16:59.770 } 00:16:59.770 } 00:16:59.770 ] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "sock", 00:16:59.770 "config": [ 00:16:59.770 { 00:16:59.770 "method": "sock_set_default_impl", 00:16:59.770 "params": { 00:16:59.770 "impl_name": "uring" 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "sock_impl_set_options", 00:16:59.770 "params": { 00:16:59.770 "impl_name": "ssl", 00:16:59.770 "recv_buf_size": 4096, 00:16:59.770 "send_buf_size": 4096, 00:16:59.770 "enable_recv_pipe": true, 00:16:59.770 "enable_quickack": false, 00:16:59.770 "enable_placement_id": 0, 00:16:59.770 "enable_zerocopy_send_server": true, 00:16:59.770 "enable_zerocopy_send_client": false, 00:16:59.770 "zerocopy_threshold": 0, 00:16:59.770 "tls_version": 0, 00:16:59.770 "enable_ktls": false 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "sock_impl_set_options", 00:16:59.770 "params": { 00:16:59.770 "impl_name": "posix", 00:16:59.770 "recv_buf_size": 2097152, 00:16:59.770 "send_buf_size": 2097152, 00:16:59.770 "enable_recv_pipe": true, 00:16:59.770 "enable_quickack": false, 00:16:59.770 "enable_placement_id": 0, 00:16:59.770 "enable_zerocopy_send_server": true, 00:16:59.770 "enable_zerocopy_send_client": false, 00:16:59.770 "zerocopy_threshold": 0, 00:16:59.770 "tls_version": 0, 00:16:59.770 "enable_ktls": false 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "sock_impl_set_options", 00:16:59.770 "params": { 00:16:59.770 "impl_name": "uring", 00:16:59.770 "recv_buf_size": 2097152, 00:16:59.770 "send_buf_size": 2097152, 00:16:59.770 "enable_recv_pipe": true, 00:16:59.770 "enable_quickack": false, 00:16:59.770 "enable_placement_id": 0, 00:16:59.770 "enable_zerocopy_send_server": false, 00:16:59.770 "enable_zerocopy_send_client": false, 00:16:59.770 "zerocopy_threshold": 0, 00:16:59.770 "tls_version": 0, 00:16:59.770 "enable_ktls": false 00:16:59.770 } 00:16:59.770 } 00:16:59.770 ] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "vmd", 00:16:59.770 "config": [] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "accel", 00:16:59.770 "config": [ 00:16:59.770 { 00:16:59.770 "method": "accel_set_options", 00:16:59.770 "params": { 00:16:59.770 "small_cache_size": 128, 00:16:59.770 "large_cache_size": 16, 00:16:59.770 "task_count": 2048, 00:16:59.770 "sequence_count": 2048, 00:16:59.770 "buf_count": 2048 00:16:59.770 } 00:16:59.770 } 00:16:59.770 ] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "bdev", 00:16:59.770 "config": [ 00:16:59.770 { 
00:16:59.770 "method": "bdev_set_options", 00:16:59.770 "params": { 00:16:59.770 "bdev_io_pool_size": 65535, 00:16:59.770 "bdev_io_cache_size": 256, 00:16:59.770 "bdev_auto_examine": true, 00:16:59.770 "iobuf_small_cache_size": 128, 00:16:59.770 "iobuf_large_cache_size": 16 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_raid_set_options", 00:16:59.770 "params": { 00:16:59.770 "process_window_size_kb": 1024 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_iscsi_set_options", 00:16:59.770 "params": { 00:16:59.770 "timeout_sec": 30 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_nvme_set_options", 00:16:59.770 "params": { 00:16:59.770 "action_on_timeout": "none", 00:16:59.770 "timeout_us": 0, 00:16:59.770 "timeout_admin_us": 0, 00:16:59.770 "keep_alive_timeout_ms": 10000, 00:16:59.770 "arbitration_burst": 0, 00:16:59.770 "low_priority_weight": 0, 00:16:59.770 "medium_priority_weight": 0, 00:16:59.770 "high_priority_weight": 0, 00:16:59.770 "nvme_adminq_poll_period_us": 10000, 00:16:59.770 "nvme_ioq_poll_period_us": 0, 00:16:59.770 "io_queue_requests": 0, 00:16:59.770 "delay_cmd_submit": true, 00:16:59.770 "transport_retry_count": 4, 00:16:59.770 "bdev_retry_count": 3, 00:16:59.770 "transport_ack_timeout": 0, 00:16:59.770 "ctrlr_loss_timeout_sec": 0, 00:16:59.770 "reconnect_delay_sec": 0, 00:16:59.770 "fast_io_fail_timeout_sec": 0, 00:16:59.770 "disable_auto_failback": false, 00:16:59.770 "generate_uuids": false, 00:16:59.770 "transport_tos": 0, 00:16:59.770 "nvme_error_stat": false, 00:16:59.770 "rdma_srq_size": 0, 00:16:59.770 "io_path_stat": false, 00:16:59.770 "allow_accel_sequence": false, 00:16:59.770 "rdma_max_cq_size": 0, 00:16:59.770 "rdma_cm_event_timeout_ms": 0, 00:16:59.770 "dhchap_digests": [ 00:16:59.770 "sha256", 00:16:59.770 "sha384", 00:16:59.770 "sha512" 00:16:59.770 ], 00:16:59.770 "dhchap_dhgroups": [ 00:16:59.770 "null", 00:16:59.770 "ffdhe2048", 00:16:59.770 "ffdhe3072", 00:16:59.770 "ffdhe4096", 00:16:59.770 "ffdhe6144", 00:16:59.770 "ffdhe8192" 00:16:59.770 ] 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_nvme_set_hotplug", 00:16:59.770 "params": { 00:16:59.770 "period_us": 100000, 00:16:59.770 "enable": false 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_malloc_create", 00:16:59.770 "params": { 00:16:59.770 "name": "malloc0", 00:16:59.770 "num_blocks": 8192, 00:16:59.770 "block_size": 4096, 00:16:59.770 "physical_block_size": 4096, 00:16:59.770 "uuid": "445a55f1-5846-4e6b-822c-1b0ba95b9126", 00:16:59.770 "optimal_io_boundary": 0 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "bdev_wait_for_examine" 00:16:59.770 } 00:16:59.770 ] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "nbd", 00:16:59.770 "config": [] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "scheduler", 00:16:59.770 "config": [ 00:16:59.770 { 00:16:59.770 "method": "framework_set_scheduler", 00:16:59.770 "params": { 00:16:59.770 "name": "static" 00:16:59.770 } 00:16:59.770 } 00:16:59.770 ] 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "subsystem": "nvmf", 00:16:59.770 "config": [ 00:16:59.770 { 00:16:59.770 "method": "nvmf_set_config", 00:16:59.770 "params": { 00:16:59.770 "discovery_filter": "match_any", 00:16:59.770 "admin_cmd_passthru": { 00:16:59.770 "identify_ctrlr": false 00:16:59.770 } 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "nvmf_set_max_subsystems", 00:16:59.770 "params": { 00:16:59.770 
"max_subsystems": 1024 00:16:59.770 } 00:16:59.770 }, 00:16:59.770 { 00:16:59.770 "method": "nvmf_set_crdt", 00:16:59.770 "params": { 00:16:59.771 "crdt1": 0, 00:16:59.771 "crdt2": 0, 00:16:59.771 "crdt3": 0 00:16:59.771 } 00:16:59.771 }, 00:16:59.771 { 00:16:59.771 "method": "nvmf_create_transport", 00:16:59.771 "params": { 00:16:59.771 "trtype": "TCP", 00:16:59.771 "max_queue_depth": 128, 00:16:59.771 "max_io_qpairs_per_ctrlr": 127, 00:16:59.771 "in_capsule_data_size": 4096, 00:16:59.771 "max_io_size": 131072, 00:16:59.771 "io_unit_size": 131072, 00:16:59.771 "max_aq_depth": 128, 00:16:59.771 "num_shared_buffers": 511, 00:16:59.771 "buf_cache_size": 4294967295, 00:16:59.771 "dif_insert_or_strip": false, 00:16:59.771 "zcopy": false, 00:16:59.771 "c2h_success": false, 00:16:59.771 "sock_priority": 0, 00:16:59.771 "abort_timeout_sec": 1, 00:16:59.771 "ack_timeout": 0, 00:16:59.771 "data_wr_pool_size": 0 00:16:59.771 } 00:16:59.771 }, 00:16:59.771 { 00:16:59.771 "method": "nvmf_create_subsystem", 00:16:59.771 "params": { 00:16:59.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.771 "allow_any_host": false, 00:16:59.771 "serial_number": "SPDK00000000000001", 00:16:59.771 "model_number": "SPDK bdev Controller", 00:16:59.771 "max_namespaces": 10, 00:16:59.771 "min_cntlid": 1, 00:16:59.771 "max_cntlid": 65519, 00:16:59.771 "ana_reporting": false 00:16:59.771 } 00:16:59.771 }, 00:16:59.771 { 00:16:59.771 "method": "nvmf_subsystem_add_host", 00:16:59.771 "params": { 00:16:59.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.771 "host": "nqn.2016-06.io.spdk:host1", 00:16:59.771 "psk": "/tmp/tmp.rZdvdJgnam" 00:16:59.771 } 00:16:59.771 }, 00:16:59.771 { 00:16:59.771 "method": "nvmf_subsystem_add_ns", 00:16:59.771 "params": { 00:16:59.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.771 "namespace": { 00:16:59.771 "nsid": 1, 00:16:59.771 "bdev_name": "malloc0", 00:16:59.771 "nguid": "445A55F158464E6B822C1B0BA95B9126", 00:16:59.771 "uuid": "445a55f1-5846-4e6b-822c-1b0ba95b9126", 00:16:59.771 "no_auto_visible": false 00:16:59.771 } 00:16:59.771 } 00:16:59.771 }, 00:16:59.771 { 00:16:59.771 "method": "nvmf_subsystem_add_listener", 00:16:59.771 "params": { 00:16:59.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.771 "listen_address": { 00:16:59.771 "trtype": "TCP", 00:16:59.771 "adrfam": "IPv4", 00:16:59.771 "traddr": "10.0.0.2", 00:16:59.771 "trsvcid": "4420" 00:16:59.771 }, 00:16:59.771 "secure_channel": true 00:16:59.771 } 00:16:59.771 } 00:16:59.771 ] 00:16:59.771 } 00:16:59.771 ] 00:16:59.771 }' 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76595 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76595 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76595 ']' 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.771 06:03:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.771 [2024-07-11 06:03:15.632892] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:16:59.771 [2024-07-11 06:03:15.633053] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.029 [2024-07-11 06:03:15.795465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.029 [2024-07-11 06:03:15.947232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.029 [2024-07-11 06:03:15.947313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.029 [2024-07-11 06:03:15.947328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.029 [2024-07-11 06:03:15.947340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.029 [2024-07-11 06:03:15.947350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.029 [2024-07-11 06:03:15.947491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.288 [2024-07-11 06:03:16.206223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:00.546 [2024-07-11 06:03:16.335863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.546 [2024-07-11 06:03:16.351804] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:00.546 [2024-07-11 06:03:16.367812] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.546 [2024-07-11 06:03:16.376832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.546 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.546 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:00.546 06:03:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.546 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.546 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=76627 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 76627 /var/tmp/bdevperf.sock 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76627 ']' 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
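The initiator side is replayed the same way: the bdevperf configuration dumped just below is passed in on /dev/fd/63, recreating the TLSTEST controller (PSK included) at start-up. A sketch of the equivalent stand-alone invocation, assuming bdevperfconf holds that JSON, followed by the RPC call that drives the run:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &
    # in the test, waitforlisten gates on /var/tmp/bdevperf.sock before this point
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests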
00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:00.805 06:03:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:00.805 "subsystems": [ 00:17:00.805 { 00:17:00.805 "subsystem": "keyring", 00:17:00.805 "config": [] 00:17:00.805 }, 00:17:00.805 { 00:17:00.805 "subsystem": "iobuf", 00:17:00.805 "config": [ 00:17:00.805 { 00:17:00.805 "method": "iobuf_set_options", 00:17:00.805 "params": { 00:17:00.805 "small_pool_count": 8192, 00:17:00.805 "large_pool_count": 1024, 00:17:00.805 "small_bufsize": 8192, 00:17:00.805 "large_bufsize": 135168 00:17:00.805 } 00:17:00.805 } 00:17:00.805 ] 00:17:00.805 }, 00:17:00.805 { 00:17:00.805 "subsystem": "sock", 00:17:00.805 "config": [ 00:17:00.805 { 00:17:00.805 "method": "sock_set_default_impl", 00:17:00.805 "params": { 00:17:00.805 "impl_name": "uring" 00:17:00.805 } 00:17:00.805 }, 00:17:00.805 { 00:17:00.805 "method": "sock_impl_set_options", 00:17:00.805 "params": { 00:17:00.805 "impl_name": "ssl", 00:17:00.805 "recv_buf_size": 4096, 00:17:00.805 "send_buf_size": 4096, 00:17:00.805 "enable_recv_pipe": true, 00:17:00.805 "enable_quickack": false, 00:17:00.805 "enable_placement_id": 0, 00:17:00.805 "enable_zerocopy_send_server": true, 00:17:00.805 "enable_zerocopy_send_client": false, 00:17:00.805 "zerocopy_threshold": 0, 00:17:00.805 "tls_version": 0, 00:17:00.805 "enable_ktls": false 00:17:00.805 } 00:17:00.805 }, 00:17:00.805 { 00:17:00.805 "method": "sock_impl_set_options", 00:17:00.805 "params": { 00:17:00.805 "impl_name": "posix", 00:17:00.805 "recv_buf_size": 2097152, 00:17:00.805 "send_buf_size": 2097152, 00:17:00.805 "enable_recv_pipe": true, 00:17:00.805 "enable_quickack": false, 00:17:00.805 "enable_placement_id": 0, 00:17:00.805 "enable_zerocopy_send_server": true, 00:17:00.805 "enable_zerocopy_send_client": false, 00:17:00.805 "zerocopy_threshold": 0, 00:17:00.805 "tls_version": 0, 00:17:00.805 "enable_ktls": false 00:17:00.805 } 00:17:00.805 }, 00:17:00.805 { 00:17:00.805 "method": "sock_impl_set_options", 00:17:00.805 "params": { 00:17:00.805 "impl_name": "uring", 00:17:00.805 "recv_buf_size": 2097152, 00:17:00.805 "send_buf_size": 2097152, 00:17:00.805 "enable_recv_pipe": true, 00:17:00.805 "enable_quickack": false, 00:17:00.805 "enable_placement_id": 0, 00:17:00.805 "enable_zerocopy_send_server": false, 00:17:00.805 "enable_zerocopy_send_client": false, 00:17:00.806 "zerocopy_threshold": 0, 00:17:00.806 "tls_version": 0, 00:17:00.806 "enable_ktls": false 00:17:00.806 } 00:17:00.806 } 00:17:00.806 ] 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "subsystem": "vmd", 00:17:00.806 "config": [] 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "subsystem": "accel", 00:17:00.806 "config": [ 00:17:00.806 { 00:17:00.806 "method": "accel_set_options", 00:17:00.806 "params": { 00:17:00.806 "small_cache_size": 128, 00:17:00.806 "large_cache_size": 16, 00:17:00.806 "task_count": 2048, 00:17:00.806 "sequence_count": 2048, 00:17:00.806 "buf_count": 2048 00:17:00.806 } 00:17:00.806 } 00:17:00.806 ] 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "subsystem": "bdev", 00:17:00.806 "config": [ 00:17:00.806 { 00:17:00.806 "method": "bdev_set_options", 00:17:00.806 "params": { 00:17:00.806 "bdev_io_pool_size": 65535, 00:17:00.806 
"bdev_io_cache_size": 256, 00:17:00.806 "bdev_auto_examine": true, 00:17:00.806 "iobuf_small_cache_size": 128, 00:17:00.806 "iobuf_large_cache_size": 16 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_raid_set_options", 00:17:00.806 "params": { 00:17:00.806 "process_window_size_kb": 1024 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_iscsi_set_options", 00:17:00.806 "params": { 00:17:00.806 "timeout_sec": 30 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_nvme_set_options", 00:17:00.806 "params": { 00:17:00.806 "action_on_timeout": "none", 00:17:00.806 "timeout_us": 0, 00:17:00.806 "timeout_admin_us": 0, 00:17:00.806 "keep_alive_timeout_ms": 10000, 00:17:00.806 "arbitration_burst": 0, 00:17:00.806 "low_priority_weight": 0, 00:17:00.806 "medium_priority_weight": 0, 00:17:00.806 "high_priority_weight": 0, 00:17:00.806 "nvme_adminq_poll_period_us": 10000, 00:17:00.806 "nvme_ioq_poll_period_us": 0, 00:17:00.806 "io_queue_requests": 512, 00:17:00.806 "delay_cmd_submit": true, 00:17:00.806 "transport_retry_count": 4, 00:17:00.806 "bdev_retry_count": 3, 00:17:00.806 "transport_ack_timeout": 0, 00:17:00.806 "ctrlr_loss_timeout_sec": 0, 00:17:00.806 "reconnect_delay_sec": 0, 00:17:00.806 "fast_io_fail_timeout_sec": 0, 00:17:00.806 "disable_auto_failback": false, 00:17:00.806 "generate_uuids": false, 00:17:00.806 "transport_tos": 0, 00:17:00.806 "nvme_error_stat": false, 00:17:00.806 "rdma_srq_size": 0, 00:17:00.806 "io_path_stat": false, 00:17:00.806 "allow_accel_sequence": false, 00:17:00.806 "rdma_max_cq_size": 0, 00:17:00.806 "rdma_cm_event_timeout_ms": 0, 00:17:00.806 "dhchap_digests": [ 00:17:00.806 "sha256", 00:17:00.806 "sha384", 00:17:00.806 "sha512" 00:17:00.806 ], 00:17:00.806 "dhchap_dhgroups": [ 00:17:00.806 "null", 00:17:00.806 "ffdhe2048", 00:17:00.806 "ffdhe3072", 00:17:00.806 "ffdhe4096", 00:17:00.806 "ffdhe6144", 00:17:00.806 "ffdhe8192" 00:17:00.806 ] 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_nvme_attach_controller", 00:17:00.806 "params": { 00:17:00.806 "name": "TLSTEST", 00:17:00.806 "trtype": "TCP", 00:17:00.806 "adrfam": "IPv4", 00:17:00.806 "traddr": "10.0.0.2", 00:17:00.806 "trsvcid": "4420", 00:17:00.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.806 "prchk_reftag": false, 00:17:00.806 "prchk_guard": false, 00:17:00.806 "ctrlr_loss_timeout_sec": 0, 00:17:00.806 "reconnect_delay_sec": 0, 00:17:00.806 "fast_io_fail_timeout_sec": 0, 00:17:00.806 "psk": "/tmp/tmp.rZdvdJgnam", 00:17:00.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.806 "hdgst": false, 00:17:00.806 "ddgst": false 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_nvme_set_hotplug", 00:17:00.806 "params": { 00:17:00.806 "period_us": 100000, 00:17:00.806 "enable": false 00:17:00.806 } 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "method": "bdev_wait_for_examine" 00:17:00.806 } 00:17:00.806 ] 00:17:00.806 }, 00:17:00.806 { 00:17:00.806 "subsystem": "nbd", 00:17:00.806 "config": [] 00:17:00.806 } 00:17:00.806 ] 00:17:00.806 }' 00:17:00.806 [2024-07-11 06:03:16.603871] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:17:00.806 [2024-07-11 06:03:16.604043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76627 ] 00:17:01.065 [2024-07-11 06:03:16.774752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.323 [2024-07-11 06:03:16.997174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.323 [2024-07-11 06:03:17.234109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:01.580 [2024-07-11 06:03:17.321366] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.580 [2024-07-11 06:03:17.321542] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:01.847 06:03:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.847 06:03:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:01.847 06:03:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:01.847 Running I/O for 10 seconds... 00:17:11.817 00:17:11.817 Latency(us) 00:17:11.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.817 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:11.817 Verification LBA range: start 0x0 length 0x2000 00:17:11.817 TLSTESTn1 : 10.04 3034.12 11.85 0.00 0.00 42097.88 7477.06 27286.81 00:17:11.817 =================================================================================================================== 00:17:11.817 Total : 3034.12 11.85 0.00 0.00 42097.88 7477.06 27286.81 00:17:11.817 0 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 76627 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76627 ']' 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76627 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76627 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76627' 00:17:11.817 killing process with pid 76627 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76627 00:17:11.817 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.817 00:17:11.817 Latency(us) 00:17:11.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.817 =================================================================================================================== 00:17:11.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.817 06:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76627 
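The first table above is internally consistent: 3034.12 IOPS of 4096-byte I/O is about 12.4 MB/s, i.e. the reported 11.85 MiB/s, and with a queue depth of 128 an average latency of roughly 42,098 us implies about 128 / 0.042098 s ≈ 3,040 IOPS, in line with the measured rate. The second table, emitted while bdevperf is being shut down, reports no additional I/O.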
00:17:11.817 [2024-07-11 06:03:27.735469] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 76595 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76595 ']' 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76595 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76595 00:17:13.191 killing process with pid 76595 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76595' 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76595 00:17:13.191 [2024-07-11 06:03:28.827837] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:13.191 06:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76595 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76783 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76783 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76783 ']' 00:17:14.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.125 06:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.383 [2024-07-11 06:03:30.064000] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:14.383 [2024-07-11 06:03:30.064820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.383 [2024-07-11 06:03:30.235479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.640 [2024-07-11 06:03:30.454052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:14.640 [2024-07-11 06:03:30.454125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.640 [2024-07-11 06:03:30.454148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.640 [2024-07-11 06:03:30.454161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.640 [2024-07-11 06:03:30.454171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.640 [2024-07-11 06:03:30.454206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.898 [2024-07-11 06:03:30.615319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rZdvdJgnam 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rZdvdJgnam 00:17:15.155 06:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.412 [2024-07-11 06:03:31.177374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.412 06:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:15.670 06:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:15.927 [2024-07-11 06:03:31.605473] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:15.927 [2024-07-11 06:03:31.605768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.927 06:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.185 malloc0 00:17:16.185 06:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rZdvdJgnam 00:17:16.442 [2024-07-11 06:03:32.330796] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=76838 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
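The setup_nvmf_tgt step traced above reduces to a short sequence of rpc.py calls against the target's default /var/tmp/spdk.sock. A condensed sketch of that sequence, using the same arguments that appear in the xtrace output (the -k flag on the listener is what enables TLS on the TCP listener, and --psk points at the pre-shared key file; the target immediately warns that this path-based PSK form is deprecated in favour of keyring keys, which a later part of this run switches to):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport and a subsystem to attach to
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10

    # TLS-enabled listener (-k) on the test address
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

    # a malloc bdev as namespace 1
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # allow the host, binding it to the PSK file (deprecated path form)
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.rZdvdJgnam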
00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 76838 /var/tmp/bdevperf.sock 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76838 ']' 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.442 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.443 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.443 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.443 06:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.701 [2024-07-11 06:03:32.469168] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:16.701 [2024-07-11 06:03:32.469382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76838 ] 00:17:16.959 [2024-07-11 06:03:32.650382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.959 [2024-07-11 06:03:32.854784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.217 [2024-07-11 06:03:33.019132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:17.475 06:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.475 06:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:17.475 06:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rZdvdJgnam 00:17:17.744 06:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:18.002 [2024-07-11 06:03:33.794524] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.002 nvme0n1 00:17:18.002 06:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:18.260 Running I/O for 1 seconds... 
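On the initiator side, this test loads the same PSK into bdevperf through the keyring API rather than passing the file path directly on the attach (which is what the earlier bdev_nvme_attach_controller with "psk": "/tmp/tmp.rZdvdJgnam" did): the key file is registered under a name, key0, and the controller attach references that name. Spelled out, the calls echoed above are:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # register the PSK file with the bdevperf process under the name key0
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rZdvdJgnam

    # attach the NVMe/TCP controller over TLS, referencing the key by name
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # run the verify workload against the resulting nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests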
00:17:19.193 00:17:19.193 Latency(us) 00:17:19.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.193 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:19.193 Verification LBA range: start 0x0 length 0x2000 00:17:19.193 nvme0n1 : 1.04 2948.52 11.52 0.00 0.00 42833.53 11439.01 28120.90 00:17:19.193 =================================================================================================================== 00:17:19.193 Total : 2948.52 11.52 0.00 0.00 42833.53 11439.01 28120.90 00:17:19.193 0 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 76838 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76838 ']' 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76838 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76838 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:19.193 killing process with pid 76838 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76838' 00:17:19.193 Received shutdown signal, test time was about 1.000000 seconds 00:17:19.193 00:17:19.193 Latency(us) 00:17:19.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.193 =================================================================================================================== 00:17:19.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76838 00:17:19.193 06:03:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76838 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 76783 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76783 ']' 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76783 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76783 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.566 killing process with pid 76783 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76783' 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76783 00:17:20.566 [2024-07-11 06:03:36.135890] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:20.566 06:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76783 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76909 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76909 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76909 ']' 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.501 06:03:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 [2024-07-11 06:03:37.327909] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:21.501 [2024-07-11 06:03:37.328069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.759 [2024-07-11 06:03:37.485487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.759 [2024-07-11 06:03:37.647790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.759 [2024-07-11 06:03:37.647863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.759 [2024-07-11 06:03:37.647878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.759 [2024-07-11 06:03:37.647890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.759 [2024-07-11 06:03:37.647900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
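The trace notices just above are worth acting on when a run fails: the target was started with -e 0xFFFF, so all tracepoint groups are enabled, and the commands named in the notices pull the events back out. A minimal sketch, assuming the spdk_trace app from the same checkout is on PATH and using the instance id 0 that the target was launched with (-i 0):

    # snapshot the nvmf tracepoints of instance 0 while the target is still running
    spdk_trace -s nvmf -i 0

    # or keep the shared-memory trace file for offline analysis after the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0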
00:17:21.759 [2024-07-11 06:03:37.647936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.017 [2024-07-11 06:03:37.814028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 [2024-07-11 06:03:38.306911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.583 malloc0 00:17:22.583 [2024-07-11 06:03:38.358154] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:22.583 [2024-07-11 06:03:38.358426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=76940 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 76940 /var/tmp/bdevperf.sock 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76940 ']' 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.583 06:03:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 [2024-07-11 06:03:38.492153] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:17:22.583 [2024-07-11 06:03:38.492360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76940 ] 00:17:22.841 [2024-07-11 06:03:38.666816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.098 [2024-07-11 06:03:38.876185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.357 [2024-07-11 06:03:39.047350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:23.615 06:03:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.615 06:03:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:23.615 06:03:39 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rZdvdJgnam 00:17:23.872 06:03:39 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:24.139 [2024-07-11 06:03:39.818664] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.139 nvme0n1 00:17:24.139 06:03:39 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.139 Running I/O for 1 seconds... 00:17:25.528 00:17:25.528 Latency(us) 00:17:25.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.528 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:25.528 Verification LBA range: start 0x0 length 0x2000 00:17:25.528 nvme0n1 : 1.03 3046.61 11.90 0.00 0.00 41292.19 8757.99 26333.56 00:17:25.528 =================================================================================================================== 00:17:25.528 Total : 3046.61 11.90 0.00 0.00 41292.19 8757.99 26333.56 00:17:25.528 0 00:17:25.528 06:03:41 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:25.528 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.528 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.528 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.528 06:03:41 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:17:25.528 "subsystems": [ 00:17:25.528 { 00:17:25.528 "subsystem": "keyring", 00:17:25.528 "config": [ 00:17:25.528 { 00:17:25.528 "method": "keyring_file_add_key", 00:17:25.528 "params": { 00:17:25.528 "name": "key0", 00:17:25.528 "path": "/tmp/tmp.rZdvdJgnam" 00:17:25.528 } 00:17:25.528 } 00:17:25.528 ] 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "subsystem": "iobuf", 00:17:25.528 "config": [ 00:17:25.528 { 00:17:25.528 "method": "iobuf_set_options", 00:17:25.528 "params": { 00:17:25.528 "small_pool_count": 8192, 00:17:25.528 "large_pool_count": 1024, 00:17:25.528 "small_bufsize": 8192, 00:17:25.528 "large_bufsize": 135168 00:17:25.528 } 00:17:25.528 } 00:17:25.528 ] 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "subsystem": "sock", 00:17:25.528 "config": [ 00:17:25.528 { 00:17:25.528 "method": "sock_set_default_impl", 00:17:25.528 "params": { 00:17:25.528 "impl_name": "uring" 
00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "sock_impl_set_options", 00:17:25.528 "params": { 00:17:25.528 "impl_name": "ssl", 00:17:25.528 "recv_buf_size": 4096, 00:17:25.528 "send_buf_size": 4096, 00:17:25.528 "enable_recv_pipe": true, 00:17:25.528 "enable_quickack": false, 00:17:25.528 "enable_placement_id": 0, 00:17:25.528 "enable_zerocopy_send_server": true, 00:17:25.528 "enable_zerocopy_send_client": false, 00:17:25.528 "zerocopy_threshold": 0, 00:17:25.528 "tls_version": 0, 00:17:25.528 "enable_ktls": false 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "sock_impl_set_options", 00:17:25.528 "params": { 00:17:25.528 "impl_name": "posix", 00:17:25.528 "recv_buf_size": 2097152, 00:17:25.528 "send_buf_size": 2097152, 00:17:25.528 "enable_recv_pipe": true, 00:17:25.528 "enable_quickack": false, 00:17:25.528 "enable_placement_id": 0, 00:17:25.528 "enable_zerocopy_send_server": true, 00:17:25.528 "enable_zerocopy_send_client": false, 00:17:25.528 "zerocopy_threshold": 0, 00:17:25.528 "tls_version": 0, 00:17:25.528 "enable_ktls": false 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "sock_impl_set_options", 00:17:25.528 "params": { 00:17:25.528 "impl_name": "uring", 00:17:25.528 "recv_buf_size": 2097152, 00:17:25.528 "send_buf_size": 2097152, 00:17:25.528 "enable_recv_pipe": true, 00:17:25.528 "enable_quickack": false, 00:17:25.528 "enable_placement_id": 0, 00:17:25.528 "enable_zerocopy_send_server": false, 00:17:25.528 "enable_zerocopy_send_client": false, 00:17:25.528 "zerocopy_threshold": 0, 00:17:25.528 "tls_version": 0, 00:17:25.528 "enable_ktls": false 00:17:25.528 } 00:17:25.528 } 00:17:25.528 ] 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "subsystem": "vmd", 00:17:25.528 "config": [] 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "subsystem": "accel", 00:17:25.528 "config": [ 00:17:25.528 { 00:17:25.528 "method": "accel_set_options", 00:17:25.528 "params": { 00:17:25.528 "small_cache_size": 128, 00:17:25.528 "large_cache_size": 16, 00:17:25.528 "task_count": 2048, 00:17:25.528 "sequence_count": 2048, 00:17:25.528 "buf_count": 2048 00:17:25.528 } 00:17:25.528 } 00:17:25.528 ] 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "subsystem": "bdev", 00:17:25.528 "config": [ 00:17:25.528 { 00:17:25.528 "method": "bdev_set_options", 00:17:25.528 "params": { 00:17:25.528 "bdev_io_pool_size": 65535, 00:17:25.528 "bdev_io_cache_size": 256, 00:17:25.528 "bdev_auto_examine": true, 00:17:25.528 "iobuf_small_cache_size": 128, 00:17:25.528 "iobuf_large_cache_size": 16 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_raid_set_options", 00:17:25.528 "params": { 00:17:25.528 "process_window_size_kb": 1024 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_iscsi_set_options", 00:17:25.528 "params": { 00:17:25.528 "timeout_sec": 30 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_nvme_set_options", 00:17:25.528 "params": { 00:17:25.528 "action_on_timeout": "none", 00:17:25.528 "timeout_us": 0, 00:17:25.528 "timeout_admin_us": 0, 00:17:25.528 "keep_alive_timeout_ms": 10000, 00:17:25.528 "arbitration_burst": 0, 00:17:25.528 "low_priority_weight": 0, 00:17:25.528 "medium_priority_weight": 0, 00:17:25.528 "high_priority_weight": 0, 00:17:25.528 "nvme_adminq_poll_period_us": 10000, 00:17:25.528 "nvme_ioq_poll_period_us": 0, 00:17:25.528 "io_queue_requests": 0, 00:17:25.528 "delay_cmd_submit": true, 00:17:25.528 "transport_retry_count": 4, 00:17:25.528 "bdev_retry_count": 3, 
00:17:25.528 "transport_ack_timeout": 0, 00:17:25.528 "ctrlr_loss_timeout_sec": 0, 00:17:25.528 "reconnect_delay_sec": 0, 00:17:25.528 "fast_io_fail_timeout_sec": 0, 00:17:25.528 "disable_auto_failback": false, 00:17:25.528 "generate_uuids": false, 00:17:25.528 "transport_tos": 0, 00:17:25.528 "nvme_error_stat": false, 00:17:25.528 "rdma_srq_size": 0, 00:17:25.528 "io_path_stat": false, 00:17:25.528 "allow_accel_sequence": false, 00:17:25.528 "rdma_max_cq_size": 0, 00:17:25.528 "rdma_cm_event_timeout_ms": 0, 00:17:25.528 "dhchap_digests": [ 00:17:25.528 "sha256", 00:17:25.528 "sha384", 00:17:25.528 "sha512" 00:17:25.528 ], 00:17:25.528 "dhchap_dhgroups": [ 00:17:25.528 "null", 00:17:25.528 "ffdhe2048", 00:17:25.528 "ffdhe3072", 00:17:25.528 "ffdhe4096", 00:17:25.528 "ffdhe6144", 00:17:25.528 "ffdhe8192" 00:17:25.528 ] 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_nvme_set_hotplug", 00:17:25.528 "params": { 00:17:25.528 "period_us": 100000, 00:17:25.528 "enable": false 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_malloc_create", 00:17:25.528 "params": { 00:17:25.528 "name": "malloc0", 00:17:25.528 "num_blocks": 8192, 00:17:25.528 "block_size": 4096, 00:17:25.528 "physical_block_size": 4096, 00:17:25.528 "uuid": "02717bfd-28e2-4ff7-a41c-c8f67c98ea58", 00:17:25.528 "optimal_io_boundary": 0 00:17:25.528 } 00:17:25.528 }, 00:17:25.528 { 00:17:25.528 "method": "bdev_wait_for_examine" 00:17:25.528 } 00:17:25.528 ] 00:17:25.528 }, 00:17:25.528 { 00:17:25.529 "subsystem": "nbd", 00:17:25.529 "config": [] 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "subsystem": "scheduler", 00:17:25.529 "config": [ 00:17:25.529 { 00:17:25.529 "method": "framework_set_scheduler", 00:17:25.529 "params": { 00:17:25.529 "name": "static" 00:17:25.529 } 00:17:25.529 } 00:17:25.529 ] 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "subsystem": "nvmf", 00:17:25.529 "config": [ 00:17:25.529 { 00:17:25.529 "method": "nvmf_set_config", 00:17:25.529 "params": { 00:17:25.529 "discovery_filter": "match_any", 00:17:25.529 "admin_cmd_passthru": { 00:17:25.529 "identify_ctrlr": false 00:17:25.529 } 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_set_max_subsystems", 00:17:25.529 "params": { 00:17:25.529 "max_subsystems": 1024 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_set_crdt", 00:17:25.529 "params": { 00:17:25.529 "crdt1": 0, 00:17:25.529 "crdt2": 0, 00:17:25.529 "crdt3": 0 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_create_transport", 00:17:25.529 "params": { 00:17:25.529 "trtype": "TCP", 00:17:25.529 "max_queue_depth": 128, 00:17:25.529 "max_io_qpairs_per_ctrlr": 127, 00:17:25.529 "in_capsule_data_size": 4096, 00:17:25.529 "max_io_size": 131072, 00:17:25.529 "io_unit_size": 131072, 00:17:25.529 "max_aq_depth": 128, 00:17:25.529 "num_shared_buffers": 511, 00:17:25.529 "buf_cache_size": 4294967295, 00:17:25.529 "dif_insert_or_strip": false, 00:17:25.529 "zcopy": false, 00:17:25.529 "c2h_success": false, 00:17:25.529 "sock_priority": 0, 00:17:25.529 "abort_timeout_sec": 1, 00:17:25.529 "ack_timeout": 0, 00:17:25.529 "data_wr_pool_size": 0 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_create_subsystem", 00:17:25.529 "params": { 00:17:25.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.529 "allow_any_host": false, 00:17:25.529 "serial_number": "00000000000000000000", 00:17:25.529 "model_number": "SPDK bdev Controller", 00:17:25.529 "max_namespaces": 32, 
00:17:25.529 "min_cntlid": 1, 00:17:25.529 "max_cntlid": 65519, 00:17:25.529 "ana_reporting": false 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_subsystem_add_host", 00:17:25.529 "params": { 00:17:25.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.529 "host": "nqn.2016-06.io.spdk:host1", 00:17:25.529 "psk": "key0" 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_subsystem_add_ns", 00:17:25.529 "params": { 00:17:25.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.529 "namespace": { 00:17:25.529 "nsid": 1, 00:17:25.529 "bdev_name": "malloc0", 00:17:25.529 "nguid": "02717BFD28E24FF7A41CC8F67C98EA58", 00:17:25.529 "uuid": "02717bfd-28e2-4ff7-a41c-c8f67c98ea58", 00:17:25.529 "no_auto_visible": false 00:17:25.529 } 00:17:25.529 } 00:17:25.529 }, 00:17:25.529 { 00:17:25.529 "method": "nvmf_subsystem_add_listener", 00:17:25.529 "params": { 00:17:25.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.529 "listen_address": { 00:17:25.529 "trtype": "TCP", 00:17:25.529 "adrfam": "IPv4", 00:17:25.529 "traddr": "10.0.0.2", 00:17:25.529 "trsvcid": "4420" 00:17:25.529 }, 00:17:25.529 "secure_channel": true 00:17:25.529 } 00:17:25.529 } 00:17:25.529 ] 00:17:25.529 } 00:17:25.529 ] 00:17:25.529 }' 00:17:25.529 06:03:41 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:25.788 06:03:41 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:17:25.788 "subsystems": [ 00:17:25.788 { 00:17:25.788 "subsystem": "keyring", 00:17:25.788 "config": [ 00:17:25.788 { 00:17:25.788 "method": "keyring_file_add_key", 00:17:25.788 "params": { 00:17:25.788 "name": "key0", 00:17:25.788 "path": "/tmp/tmp.rZdvdJgnam" 00:17:25.788 } 00:17:25.788 } 00:17:25.788 ] 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "subsystem": "iobuf", 00:17:25.788 "config": [ 00:17:25.788 { 00:17:25.788 "method": "iobuf_set_options", 00:17:25.788 "params": { 00:17:25.788 "small_pool_count": 8192, 00:17:25.788 "large_pool_count": 1024, 00:17:25.788 "small_bufsize": 8192, 00:17:25.788 "large_bufsize": 135168 00:17:25.788 } 00:17:25.788 } 00:17:25.788 ] 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "subsystem": "sock", 00:17:25.788 "config": [ 00:17:25.788 { 00:17:25.788 "method": "sock_set_default_impl", 00:17:25.788 "params": { 00:17:25.788 "impl_name": "uring" 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "sock_impl_set_options", 00:17:25.788 "params": { 00:17:25.788 "impl_name": "ssl", 00:17:25.788 "recv_buf_size": 4096, 00:17:25.788 "send_buf_size": 4096, 00:17:25.788 "enable_recv_pipe": true, 00:17:25.788 "enable_quickack": false, 00:17:25.788 "enable_placement_id": 0, 00:17:25.788 "enable_zerocopy_send_server": true, 00:17:25.788 "enable_zerocopy_send_client": false, 00:17:25.788 "zerocopy_threshold": 0, 00:17:25.788 "tls_version": 0, 00:17:25.788 "enable_ktls": false 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "sock_impl_set_options", 00:17:25.788 "params": { 00:17:25.788 "impl_name": "posix", 00:17:25.788 "recv_buf_size": 2097152, 00:17:25.788 "send_buf_size": 2097152, 00:17:25.788 "enable_recv_pipe": true, 00:17:25.788 "enable_quickack": false, 00:17:25.788 "enable_placement_id": 0, 00:17:25.788 "enable_zerocopy_send_server": true, 00:17:25.788 "enable_zerocopy_send_client": false, 00:17:25.788 "zerocopy_threshold": 0, 00:17:25.788 "tls_version": 0, 00:17:25.788 "enable_ktls": false 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": 
"sock_impl_set_options", 00:17:25.788 "params": { 00:17:25.788 "impl_name": "uring", 00:17:25.788 "recv_buf_size": 2097152, 00:17:25.788 "send_buf_size": 2097152, 00:17:25.788 "enable_recv_pipe": true, 00:17:25.788 "enable_quickack": false, 00:17:25.788 "enable_placement_id": 0, 00:17:25.788 "enable_zerocopy_send_server": false, 00:17:25.788 "enable_zerocopy_send_client": false, 00:17:25.788 "zerocopy_threshold": 0, 00:17:25.788 "tls_version": 0, 00:17:25.788 "enable_ktls": false 00:17:25.788 } 00:17:25.788 } 00:17:25.788 ] 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "subsystem": "vmd", 00:17:25.788 "config": [] 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "subsystem": "accel", 00:17:25.788 "config": [ 00:17:25.788 { 00:17:25.788 "method": "accel_set_options", 00:17:25.788 "params": { 00:17:25.788 "small_cache_size": 128, 00:17:25.788 "large_cache_size": 16, 00:17:25.788 "task_count": 2048, 00:17:25.788 "sequence_count": 2048, 00:17:25.788 "buf_count": 2048 00:17:25.788 } 00:17:25.788 } 00:17:25.788 ] 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "subsystem": "bdev", 00:17:25.788 "config": [ 00:17:25.788 { 00:17:25.788 "method": "bdev_set_options", 00:17:25.788 "params": { 00:17:25.788 "bdev_io_pool_size": 65535, 00:17:25.788 "bdev_io_cache_size": 256, 00:17:25.788 "bdev_auto_examine": true, 00:17:25.788 "iobuf_small_cache_size": 128, 00:17:25.788 "iobuf_large_cache_size": 16 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_raid_set_options", 00:17:25.788 "params": { 00:17:25.788 "process_window_size_kb": 1024 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_iscsi_set_options", 00:17:25.788 "params": { 00:17:25.788 "timeout_sec": 30 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_nvme_set_options", 00:17:25.788 "params": { 00:17:25.788 "action_on_timeout": "none", 00:17:25.788 "timeout_us": 0, 00:17:25.788 "timeout_admin_us": 0, 00:17:25.788 "keep_alive_timeout_ms": 10000, 00:17:25.788 "arbitration_burst": 0, 00:17:25.788 "low_priority_weight": 0, 00:17:25.788 "medium_priority_weight": 0, 00:17:25.788 "high_priority_weight": 0, 00:17:25.788 "nvme_adminq_poll_period_us": 10000, 00:17:25.788 "nvme_ioq_poll_period_us": 0, 00:17:25.788 "io_queue_requests": 512, 00:17:25.788 "delay_cmd_submit": true, 00:17:25.788 "transport_retry_count": 4, 00:17:25.788 "bdev_retry_count": 3, 00:17:25.788 "transport_ack_timeout": 0, 00:17:25.788 "ctrlr_loss_timeout_sec": 0, 00:17:25.788 "reconnect_delay_sec": 0, 00:17:25.788 "fast_io_fail_timeout_sec": 0, 00:17:25.788 "disable_auto_failback": false, 00:17:25.788 "generate_uuids": false, 00:17:25.788 "transport_tos": 0, 00:17:25.788 "nvme_error_stat": false, 00:17:25.788 "rdma_srq_size": 0, 00:17:25.788 "io_path_stat": false, 00:17:25.788 "allow_accel_sequence": false, 00:17:25.788 "rdma_max_cq_size": 0, 00:17:25.788 "rdma_cm_event_timeout_ms": 0, 00:17:25.788 "dhchap_digests": [ 00:17:25.788 "sha256", 00:17:25.788 "sha384", 00:17:25.788 "sha512" 00:17:25.788 ], 00:17:25.788 "dhchap_dhgroups": [ 00:17:25.788 "null", 00:17:25.788 "ffdhe2048", 00:17:25.788 "ffdhe3072", 00:17:25.788 "ffdhe4096", 00:17:25.788 "ffdhe6144", 00:17:25.788 "ffdhe8192" 00:17:25.788 ] 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_nvme_attach_controller", 00:17:25.788 "params": { 00:17:25.788 "name": "nvme0", 00:17:25.788 "trtype": "TCP", 00:17:25.788 "adrfam": "IPv4", 00:17:25.788 "traddr": "10.0.0.2", 00:17:25.788 "trsvcid": "4420", 00:17:25.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:17:25.788 "prchk_reftag": false, 00:17:25.788 "prchk_guard": false, 00:17:25.788 "ctrlr_loss_timeout_sec": 0, 00:17:25.788 "reconnect_delay_sec": 0, 00:17:25.788 "fast_io_fail_timeout_sec": 0, 00:17:25.788 "psk": "key0", 00:17:25.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:25.788 "hdgst": false, 00:17:25.788 "ddgst": false 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_nvme_set_hotplug", 00:17:25.788 "params": { 00:17:25.788 "period_us": 100000, 00:17:25.788 "enable": false 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.788 "method": "bdev_enable_histogram", 00:17:25.788 "params": { 00:17:25.788 "name": "nvme0n1", 00:17:25.788 "enable": true 00:17:25.788 } 00:17:25.788 }, 00:17:25.788 { 00:17:25.789 "method": "bdev_wait_for_examine" 00:17:25.789 } 00:17:25.789 ] 00:17:25.789 }, 00:17:25.789 { 00:17:25.789 "subsystem": "nbd", 00:17:25.789 "config": [] 00:17:25.789 } 00:17:25.789 ] 00:17:25.789 }' 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 76940 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76940 ']' 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76940 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76940 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.789 killing process with pid 76940 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76940' 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76940 00:17:25.789 Received shutdown signal, test time was about 1.000000 seconds 00:17:25.789 00:17:25.789 Latency(us) 00:17:25.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.789 =================================================================================================================== 00:17:25.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.789 06:03:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76940 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 76909 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76909 ']' 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76909 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76909 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76909' 00:17:26.722 killing process with pid 76909 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76909 00:17:26.722 06:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76909 
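Everything from the tgtcfg/bperfcfg dumps onwards exercises configuration round-tripping: both running processes are asked for their full JSON configuration with save_config, both are killed, and fresh processes are then started directly from the saved JSON instead of being reconfigured RPC by RPC. Note that rpc_cmd in the log is the test suite's wrapper around scripts/rpc.py, and the /dev/fd/62 and /dev/fd/63 arguments on the relaunch lines are what bash process substitution of the saved strings looks like, so the flow is roughly as follows (paths relative to the SPDK checkout; the suite additionally wraps the target in ip netns exec nvmf_tgt_ns_spdk and in its waitforlisten helper):

    # capture the live configuration of the target and of bdevperf
    tgtcfg=$(scripts/rpc.py save_config)
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

    # stop both processes, then bring them back up from the saved JSON alone
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

If the reload works, the relaunched target is already listening with TLS on 10.0.0.2:4420 and bdevperf already has key0 and the nvme0 controller defined, which is what the remainder of the log goes on to exercise.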
00:17:28.096 06:03:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:28.096 06:03:43 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:17:28.096 "subsystems": [ 00:17:28.096 { 00:17:28.096 "subsystem": "keyring", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "keyring_file_add_key", 00:17:28.096 "params": { 00:17:28.096 "name": "key0", 00:17:28.096 "path": "/tmp/tmp.rZdvdJgnam" 00:17:28.096 } 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "iobuf", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "iobuf_set_options", 00:17:28.096 "params": { 00:17:28.096 "small_pool_count": 8192, 00:17:28.096 "large_pool_count": 1024, 00:17:28.096 "small_bufsize": 8192, 00:17:28.096 "large_bufsize": 135168 00:17:28.096 } 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "sock", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "sock_set_default_impl", 00:17:28.096 "params": { 00:17:28.096 "impl_name": "uring" 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "sock_impl_set_options", 00:17:28.096 "params": { 00:17:28.096 "impl_name": "ssl", 00:17:28.096 "recv_buf_size": 4096, 00:17:28.096 "send_buf_size": 4096, 00:17:28.096 "enable_recv_pipe": true, 00:17:28.096 "enable_quickack": false, 00:17:28.096 "enable_placement_id": 0, 00:17:28.096 "enable_zerocopy_send_server": true, 00:17:28.096 "enable_zerocopy_send_client": false, 00:17:28.096 "zerocopy_threshold": 0, 00:17:28.096 "tls_version": 0, 00:17:28.096 "enable_ktls": false 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "sock_impl_set_options", 00:17:28.096 "params": { 00:17:28.096 "impl_name": "posix", 00:17:28.096 "recv_buf_size": 2097152, 00:17:28.096 "send_buf_size": 2097152, 00:17:28.096 "enable_recv_pipe": true, 00:17:28.096 "enable_quickack": false, 00:17:28.096 "enable_placement_id": 0, 00:17:28.096 "enable_zerocopy_send_server": true, 00:17:28.096 "enable_zerocopy_send_client": false, 00:17:28.096 "zerocopy_threshold": 0, 00:17:28.096 "tls_version": 0, 00:17:28.096 "enable_ktls": false 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "sock_impl_set_options", 00:17:28.096 "params": { 00:17:28.096 "impl_name": "uring", 00:17:28.096 "recv_buf_size": 2097152, 00:17:28.096 "send_buf_size": 2097152, 00:17:28.096 "enable_recv_pipe": true, 00:17:28.096 "enable_quickack": false, 00:17:28.096 "enable_placement_id": 0, 00:17:28.096 "enable_zerocopy_send_server": false, 00:17:28.096 "enable_zerocopy_send_client": false, 00:17:28.096 "zerocopy_threshold": 0, 00:17:28.096 "tls_version": 0, 00:17:28.096 "enable_ktls": false 00:17:28.096 } 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "vmd", 00:17:28.096 "config": [] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "accel", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "accel_set_options", 00:17:28.096 "params": { 00:17:28.096 "small_cache_size": 128, 00:17:28.096 "large_cache_size": 16, 00:17:28.096 "task_count": 2048, 00:17:28.096 "sequence_count": 2048, 00:17:28.096 "buf_count": 2048 00:17:28.096 } 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "bdev", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "bdev_set_options", 00:17:28.096 "params": { 00:17:28.096 "bdev_io_pool_size": 65535, 00:17:28.096 "bdev_io_cache_size": 256, 00:17:28.096 "bdev_auto_examine": true, 00:17:28.096 
"iobuf_small_cache_size": 128, 00:17:28.096 "iobuf_large_cache_size": 16 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_raid_set_options", 00:17:28.096 "params": { 00:17:28.096 "process_window_size_kb": 1024 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_iscsi_set_options", 00:17:28.096 "params": { 00:17:28.096 "timeout_sec": 30 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_nvme_set_options", 00:17:28.096 "params": { 00:17:28.096 "action_on_timeout": "none", 00:17:28.096 "timeout_us": 0, 00:17:28.096 "timeout_admin_us": 0, 00:17:28.096 "keep_alive_timeout_ms": 10000, 00:17:28.096 "arbitration_burst": 0, 00:17:28.096 "low_priority_weight": 0, 00:17:28.096 "medium_priority_weight": 0, 00:17:28.096 "high_priority_weight": 0, 00:17:28.096 "nvme_adminq_poll_period_us": 10000, 00:17:28.096 "nvme_ioq_poll_period_us": 0, 00:17:28.096 "io_queue_requests": 0, 00:17:28.096 "delay_cmd_submit": true, 00:17:28.096 "transport_retry_count": 4, 00:17:28.096 "bdev_retry_count": 3, 00:17:28.096 "transport_ack_timeout": 0, 00:17:28.096 "ctrlr_loss_timeout_sec": 0, 00:17:28.096 "reconnect_delay_sec": 0, 00:17:28.096 "fast_io_fail_timeout_sec": 0, 00:17:28.096 "disable_auto_failback": false, 00:17:28.096 "generate_uuids": false, 00:17:28.096 "transport_tos": 0, 00:17:28.096 "nvme_error_stat": false, 00:17:28.096 "rdma_srq_size": 0, 00:17:28.096 "io_path_stat": false, 00:17:28.096 "allow_accel_sequence": false, 00:17:28.096 "rdma_max_cq_size": 0, 00:17:28.096 "rdma_cm_event_timeout_ms": 0, 00:17:28.096 "dhchap_digests": [ 00:17:28.096 "sha256", 00:17:28.096 "sha384", 00:17:28.096 "sha512" 00:17:28.096 ], 00:17:28.096 "dhchap_dhgroups": [ 00:17:28.096 "null", 00:17:28.096 "ffdhe2048", 00:17:28.096 "ffdhe3072", 00:17:28.096 "ffdhe4096", 00:17:28.096 "ffdhe6144", 00:17:28.096 "ffdhe8192" 00:17:28.096 ] 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_nvme_set_hotplug", 00:17:28.096 "params": { 00:17:28.096 "period_us": 100000, 00:17:28.096 "enable": false 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_malloc_create", 00:17:28.096 "params": { 00:17:28.096 "name": "malloc0", 00:17:28.096 "num_blocks": 8192, 00:17:28.096 "block_size": 4096, 00:17:28.096 "physical_block_size": 4096, 00:17:28.096 "uuid": "02717bfd-28e2-4ff7-a41c-c8f67c98ea58", 00:17:28.096 "optimal_io_boundary": 0 00:17:28.096 } 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "method": "bdev_wait_for_examine" 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "nbd", 00:17:28.096 "config": [] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "scheduler", 00:17:28.096 "config": [ 00:17:28.096 { 00:17:28.096 "method": "framework_set_scheduler", 00:17:28.096 "params": { 00:17:28.096 "name": "static" 00:17:28.096 } 00:17:28.096 } 00:17:28.096 ] 00:17:28.096 }, 00:17:28.096 { 00:17:28.096 "subsystem": "nvmf", 00:17:28.097 "config": [ 00:17:28.097 { 00:17:28.097 "method": "nvmf_set_config", 00:17:28.097 "params": { 00:17:28.097 "discovery_filter": "match_any", 00:17:28.097 "admin_cmd_passthru": { 00:17:28.097 "identify_ctrlr": false 00:17:28.097 } 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_set_max_subsystems", 00:17:28.097 "params": { 00:17:28.097 "max_subsystems": 1024 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_set_crdt", 00:17:28.097 "params": { 00:17:28.097 "crdt1": 0, 00:17:28.097 "crdt2": 0, 00:17:28.097 "crdt3": 0 
00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_create_transport", 00:17:28.097 "params": { 00:17:28.097 "trtype": "TCP", 00:17:28.097 "max_queue_depth": 128, 00:17:28.097 "max_io_qpairs_per_ctrlr": 127, 00:17:28.097 "in_capsule_data_size": 4096, 00:17:28.097 "max_io_size": 131072, 00:17:28.097 "io_unit_size": 131072, 00:17:28.097 "max_aq_depth": 128, 00:17:28.097 "num_shared_buffers": 511, 00:17:28.097 "buf_cache_size": 4294967295, 00:17:28.097 "dif_insert_or_strip": false, 00:17:28.097 "zcopy": false, 00:17:28.097 "c2h_success": false, 00:17:28.097 "sock_priority": 0, 00:17:28.097 "abort_timeout_sec": 1, 00:17:28.097 "ack_timeout": 0, 00:17:28.097 "data_wr_pool_size": 0 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_create_subsystem", 00:17:28.097 "params": { 00:17:28.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.097 "allow_any_host": false, 00:17:28.097 "serial_number": "00000000000000000000", 00:17:28.097 "model_number": "SPDK bdev Controller", 00:17:28.097 "max_namespaces": 32, 00:17:28.097 "min_cntlid": 1, 00:17:28.097 "max_cntlid": 65519, 00:17:28.097 "ana_reporting": false 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_subsystem_add_host", 00:17:28.097 "params": { 00:17:28.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.097 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.097 "psk": "key0" 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_subsystem_add_ns", 00:17:28.097 "params": { 00:17:28.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.097 "namespace": { 00:17:28.097 "nsid": 1, 00:17:28.097 "bdev_name": "malloc0", 00:17:28.097 "nguid": "02717BFD28E24FF7A41CC8F67C98EA58", 00:17:28.097 "uuid": "02717bfd-28e2-4ff7-a41c-c8f67c98ea58", 00:17:28.097 "no_auto_visible": false 00:17:28.097 } 00:17:28.097 } 00:17:28.097 }, 00:17:28.097 { 00:17:28.097 "method": "nvmf_subsystem_add_listener", 00:17:28.097 "params": { 00:17:28.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.097 "listen_address": { 00:17:28.097 "trtype": "TCP", 00:17:28.097 "adrfam": "IPv4", 00:17:28.097 "traddr": "10.0.0.2", 00:17:28.097 "trsvcid": "4420" 00:17:28.097 }, 00:17:28.097 "secure_channel": true 00:17:28.097 } 00:17:28.097 } 00:17:28.097 ] 00:17:28.097 } 00:17:28.097 ] 00:17:28.097 }' 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77015 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77015 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77015 ']' 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.097 06:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.097 [2024-07-11 06:03:43.819167] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:28.097 [2024-07-11 06:03:43.819357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.097 [2024-07-11 06:03:43.992382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.355 [2024-07-11 06:03:44.154461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.355 [2024-07-11 06:03:44.154557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.355 [2024-07-11 06:03:44.154574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.355 [2024-07-11 06:03:44.154587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.355 [2024-07-11 06:03:44.154597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.355 [2024-07-11 06:03:44.154756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.613 [2024-07-11 06:03:44.444413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:28.872 [2024-07-11 06:03:44.590487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.872 [2024-07-11 06:03:44.622463] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:28.872 [2024-07-11 06:03:44.634888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=77047 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 77047 /var/tmp/bdevperf.sock 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77047 ']' 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:28.872 06:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:17:28.872 "subsystems": [ 00:17:28.872 { 00:17:28.872 "subsystem": "keyring", 00:17:28.872 "config": [ 00:17:28.872 { 00:17:28.872 "method": "keyring_file_add_key", 00:17:28.872 "params": { 00:17:28.872 "name": "key0", 00:17:28.872 "path": "/tmp/tmp.rZdvdJgnam" 00:17:28.872 } 00:17:28.872 } 00:17:28.872 ] 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "subsystem": "iobuf", 00:17:28.872 "config": [ 00:17:28.872 { 00:17:28.872 "method": "iobuf_set_options", 00:17:28.872 "params": { 00:17:28.872 "small_pool_count": 8192, 00:17:28.872 "large_pool_count": 1024, 00:17:28.872 "small_bufsize": 8192, 00:17:28.872 "large_bufsize": 135168 00:17:28.872 } 00:17:28.872 } 00:17:28.872 ] 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "subsystem": "sock", 00:17:28.872 "config": [ 00:17:28.872 { 00:17:28.872 "method": "sock_set_default_impl", 00:17:28.872 "params": { 00:17:28.872 "impl_name": "uring" 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "sock_impl_set_options", 00:17:28.872 "params": { 00:17:28.872 "impl_name": "ssl", 00:17:28.872 "recv_buf_size": 4096, 00:17:28.872 "send_buf_size": 4096, 00:17:28.872 "enable_recv_pipe": true, 00:17:28.872 "enable_quickack": false, 00:17:28.872 "enable_placement_id": 0, 00:17:28.872 "enable_zerocopy_send_server": true, 00:17:28.872 "enable_zerocopy_send_client": false, 00:17:28.872 "zerocopy_threshold": 0, 00:17:28.872 "tls_version": 0, 00:17:28.872 "enable_ktls": false 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "sock_impl_set_options", 00:17:28.872 "params": { 00:17:28.872 "impl_name": "posix", 00:17:28.872 "recv_buf_size": 2097152, 00:17:28.872 "send_buf_size": 2097152, 00:17:28.872 "enable_recv_pipe": true, 00:17:28.872 "enable_quickack": false, 00:17:28.872 "enable_placement_id": 0, 00:17:28.872 "enable_zerocopy_send_server": true, 00:17:28.872 "enable_zerocopy_send_client": false, 00:17:28.872 "zerocopy_threshold": 0, 00:17:28.872 "tls_version": 0, 00:17:28.872 "enable_ktls": false 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "sock_impl_set_options", 00:17:28.872 "params": { 00:17:28.872 "impl_name": "uring", 00:17:28.872 "recv_buf_size": 2097152, 00:17:28.872 "send_buf_size": 2097152, 00:17:28.872 "enable_recv_pipe": true, 00:17:28.872 "enable_quickack": false, 00:17:28.872 "enable_placement_id": 0, 00:17:28.872 "enable_zerocopy_send_server": false, 00:17:28.872 "enable_zerocopy_send_client": false, 00:17:28.872 "zerocopy_threshold": 0, 00:17:28.872 "tls_version": 0, 00:17:28.872 "enable_ktls": false 00:17:28.872 } 00:17:28.872 } 00:17:28.872 ] 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "subsystem": "vmd", 00:17:28.872 "config": [] 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "subsystem": "accel", 00:17:28.872 "config": [ 00:17:28.872 { 00:17:28.872 "method": "accel_set_options", 00:17:28.872 "params": { 00:17:28.872 "small_cache_size": 128, 00:17:28.872 "large_cache_size": 16, 00:17:28.872 "task_count": 2048, 00:17:28.872 "sequence_count": 2048, 00:17:28.872 "buf_count": 2048 00:17:28.872 } 00:17:28.872 } 00:17:28.872 ] 00:17:28.872 }, 00:17:28.872 { 
00:17:28.872 "subsystem": "bdev", 00:17:28.872 "config": [ 00:17:28.872 { 00:17:28.872 "method": "bdev_set_options", 00:17:28.872 "params": { 00:17:28.872 "bdev_io_pool_size": 65535, 00:17:28.872 "bdev_io_cache_size": 256, 00:17:28.872 "bdev_auto_examine": true, 00:17:28.872 "iobuf_small_cache_size": 128, 00:17:28.872 "iobuf_large_cache_size": 16 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "bdev_raid_set_options", 00:17:28.872 "params": { 00:17:28.872 "process_window_size_kb": 1024 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "bdev_iscsi_set_options", 00:17:28.872 "params": { 00:17:28.872 "timeout_sec": 30 00:17:28.872 } 00:17:28.872 }, 00:17:28.872 { 00:17:28.872 "method": "bdev_nvme_set_options", 00:17:28.872 "params": { 00:17:28.872 "action_on_timeout": "none", 00:17:28.872 "timeout_us": 0, 00:17:28.872 "timeout_admin_us": 0, 00:17:28.872 "keep_alive_timeout_ms": 10000, 00:17:28.872 "arbitration_burst": 0, 00:17:28.872 "low_priority_weight": 0, 00:17:28.872 "medium_priority_weight": 0, 00:17:28.872 "high_priority_weight": 0, 00:17:28.872 "nvme_adminq_poll_period_us": 10000, 00:17:28.872 "nvme_ioq_poll_period_us": 0, 00:17:28.872 "io_queue_requests": 512, 00:17:28.872 "delay_cmd_submit": true, 00:17:28.872 "transport_retry_count": 4, 00:17:28.872 "bdev_retry_count": 3, 00:17:28.872 "transport_ack_timeout": 0, 00:17:28.872 "ctrlr_loss_timeout_sec": 0, 00:17:28.872 "reconnect_delay_sec": 0, 00:17:28.872 "fast_io_fail_timeout_sec": 0, 00:17:28.872 "disable_auto_failback": false, 00:17:28.872 "generate_uuids": false, 00:17:28.872 "transport_tos": 0, 00:17:28.872 "nvme_error_stat": false, 00:17:28.872 "rdma_srq_size": 0, 00:17:28.873 "io_path_stat": false, 00:17:28.873 "allow_accel_sequence": false, 00:17:28.873 "rdma_max_cq_size": 0, 00:17:28.873 "rdma_cm_event_timeout_ms": 0, 00:17:28.873 "dhchap_digests": [ 00:17:28.873 "sha256", 00:17:28.873 "sha384", 00:17:28.873 "sha512" 00:17:28.873 ], 00:17:28.873 "dhchap_dhgroups": [ 00:17:28.873 "null", 00:17:28.873 "ffdhe2048", 00:17:28.873 "ffdhe3072", 00:17:28.873 "ffdhe4096", 00:17:28.873 "ffdhe6144", 00:17:28.873 "ffdhe8192" 00:17:28.873 ] 00:17:28.873 } 00:17:28.873 }, 00:17:28.873 { 00:17:28.873 "method": "bdev_nvme_attach_controller", 00:17:28.873 "params": { 00:17:28.873 "name": "nvme0", 00:17:28.873 "trtype": "TCP", 00:17:28.873 "adrfam": "IPv4", 00:17:28.873 "traddr": "10.0.0.2", 00:17:28.873 "trsvcid": "4420", 00:17:28.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.873 "prchk_reftag": false, 00:17:28.873 "prchk_guard": false, 00:17:28.873 "ctrlr_loss_timeout_sec": 0, 00:17:28.873 "reconnect_delay_sec": 0, 00:17:28.873 "fast_io_fail_timeout_sec": 0, 00:17:28.873 "psk": "key0", 00:17:28.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.873 "hdgst": false, 00:17:28.873 "ddgst": false 00:17:28.873 } 00:17:28.873 }, 00:17:28.873 { 00:17:28.873 "method": "bdev_nvme_set_hotplug", 00:17:28.873 "params": { 00:17:28.873 "period_us": 100000, 00:17:28.873 "enable": false 00:17:28.873 } 00:17:28.873 }, 00:17:28.873 { 00:17:28.873 "method": "bdev_enable_histogram", 00:17:28.873 "params": { 00:17:28.873 "name": "nvme0n1", 00:17:28.873 "enable": true 00:17:28.873 } 00:17:28.873 }, 00:17:28.873 { 00:17:28.873 "method": "bdev_wait_for_examine" 00:17:28.873 } 00:17:28.873 ] 00:17:28.873 }, 00:17:28.873 { 00:17:28.873 "subsystem": "nbd", 00:17:28.873 "config": [] 00:17:28.873 } 00:17:28.873 ] 00:17:28.873 }' 00:17:29.131 [2024-07-11 06:03:44.815442] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 
24.03.0 initialization... 00:17:29.131 [2024-07-11 06:03:44.815618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77047 ] 00:17:29.131 [2024-07-11 06:03:44.991051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.389 [2024-07-11 06:03:45.214099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.647 [2024-07-11 06:03:45.472153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:29.905 [2024-07-11 06:03:45.574164] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.905 06:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.905 06:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:29.905 06:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:29.905 06:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:30.163 06:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.163 06:03:46 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.421 Running I/O for 1 seconds... 00:17:31.356 00:17:31.356 Latency(us) 00:17:31.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.356 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.356 Verification LBA range: start 0x0 length 0x2000 00:17:31.356 nvme0n1 : 1.02 3008.31 11.75 0.00 0.00 41963.41 703.77 27048.49 00:17:31.356 =================================================================================================================== 00:17:31.356 Total : 3008.31 11.75 0.00 0.00 41963.41 703.77 27048.49 00:17:31.356 0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:31.356 nvmf_trace.0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 77047 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77047 ']' 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # kill -0 77047 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77047 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:31.356 killing process with pid 77047 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77047' 00:17:31.356 Received shutdown signal, test time was about 1.000000 seconds 00:17:31.356 00:17:31.356 Latency(us) 00:17:31.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.356 =================================================================================================================== 00:17:31.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.356 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77047 00:17:31.615 06:03:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77047 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.549 rmmod nvme_tcp 00:17:32.549 rmmod nvme_fabrics 00:17:32.549 rmmod nvme_keyring 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 77015 ']' 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 77015 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77015 ']' 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77015 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77015 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:32.549 killing process with pid 77015 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77015' 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77015 00:17:32.549 06:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77015 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.925 
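The rmmod messages above come from the nvmfcleanup step of nvmftestfini, which retries unloading the kernel initiator modules once the TLS tests are done. A condensed sketch of that flow follows; the commands and the 20-iteration bound are taken from the trace, while the break-on-success and sleep handling are assumptions about what nvmf/common.sh does between retries.

  # Sketch of nvmfcleanup as traced above; '&& break' and the sleep are assumed,
  # the modprobe commands and the {1..20} bound are read directly off the log.
  nvmfcleanup_sketch() {
      sync
      set +e
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && break   # trace shows this also rmmod's nvme_fabrics and nvme_keyring
          sleep 1
      done
      modprobe -v -r nvme-fabrics
      set -e
  }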
06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.M5vEkY7P4o /tmp/tmp.w223dVCb5h /tmp/tmp.rZdvdJgnam 00:17:33.925 00:17:33.925 real 1m43.137s 00:17:33.925 user 2m45.643s 00:17:33.925 sys 0m26.453s 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.925 ************************************ 00:17:33.925 END TEST nvmf_tls 00:17:33.925 ************************************ 00:17:33.925 06:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.925 06:03:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:33.925 06:03:49 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.925 06:03:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:33.925 06:03:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.926 06:03:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.926 ************************************ 00:17:33.926 START TEST nvmf_fips 00:17:33.926 ************************************ 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.926 * Looking for test storage... 
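The asterisk banners and the real/user/sys summary around "END TEST nvmf_tls" / "START TEST nvmf_fips" are produced by the run_test wrapper from autotest_common.sh. Only the banners and the timing output are observable here, so this is a rough sketch of that behaviour; the banner formatting, the use of the time keyword and the return handling are assumptions.

  # Rough sketch of a run_test-style wrapper; only the banners and the timing
  # summary are visible in the log, everything else here is assumed.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # e.g. run_test_sketch nvmf_fips \
  #     /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp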
00:17:33.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:33.926 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:34.185 Error setting digest 00:17:34.185 00521BB2997F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:34.185 00521BB2997F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:34.185 Cannot find device "nvmf_tgt_br" 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.185 Cannot find device "nvmf_tgt_br2" 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:34.185 Cannot find device "nvmf_tgt_br" 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:34.185 06:03:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:34.185 Cannot find device "nvmf_tgt_br2" 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.185 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.186 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.186 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.186 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:34.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:34.444 00:17:34.444 --- 10.0.0.2 ping statistics --- 00:17:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.444 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:34.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:34.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:34.444 00:17:34.444 --- 10.0.0.3 ping statistics --- 00:17:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.444 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:34.444 00:17:34.444 --- 10.0.0.1 ping statistics --- 00:17:34.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.444 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=77338 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 77338 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77338 ']' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.444 06:03:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 [2024-07-11 06:03:50.443442] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
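All of the ip/iptables plumbing traced above builds the virtual topology the FIPS test runs on: the target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace, the host-side ends are enslaved to the nvmf_br bridge, and TCP port 4420 is opened on the initiator interface, after which the three pings confirm reachability. Collected in one place, with the commands as they appear in the trace:

  # nvmf_veth_init topology, condensed from the trace above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT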
00:17:34.703 [2024-07-11 06:03:50.443606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.703 [2024-07-11 06:03:50.619800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.961 [2024-07-11 06:03:50.851486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.961 [2024-07-11 06:03:50.851560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.961 [2024-07-11 06:03:50.851582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.961 [2024-07-11 06:03:50.851615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.961 [2024-07-11 06:03:50.851630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.961 [2024-07-11 06:03:50.851710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.219 [2024-07-11 06:03:51.033681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.478 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.736 [2024-07-11 06:03:51.612829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.736 [2024-07-11 06:03:51.628764] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.736 [2024-07-11 06:03:51.628976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.993 [2024-07-11 06:03:51.679114] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:35.993 malloc0 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
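Between the chmod of key.txt and the "Target Listening" notice, setup_nvmf_tgt_conf configures the target through rpc.py. The individual RPC calls are not echoed in this trace, so the sequence below is only a plausible sketch built from the notices that do appear (TCP transport init, the malloc0 bdev, the TLS listener on 10.0.0.2:4420, and the PSK-path deprecation warning); the listener's -k secure-channel flag, the malloc sizes and the exact argument spellings are assumptions.

  # Plausible sketch of setup_nvmf_tgt_conf key.txt -- not echoed in the trace.
  # RPC names are standard SPDK RPCs; flags and sizes marked below are assumptions.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o        # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  $rpc bdev_malloc_create -b malloc0 32 4096  # "malloc0" appears in the trace; sizes assumed
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k           # '-k' (secure channel / TLS) is an assumption
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt  # triggers the PSK-path deprecation warning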
00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=77379 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 77379 /var/tmp/bdevperf.sock 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 77379 ']' 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.993 06:03:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:35.993 [2024-07-11 06:03:51.821715] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:35.993 [2024-07-11 06:03:51.822142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77379 ] 00:17:36.251 [2024-07-11 06:03:51.986758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.509 [2024-07-11 06:03:52.206595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.509 [2024-07-11 06:03:52.374613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:37.093 06:03:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.093 06:03:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:37.094 06:03:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.094 [2024-07-11 06:03:52.965178] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.094 [2024-07-11 06:03:52.965353] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.352 TLSTESTn1 00:17:37.352 06:03:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.352 Running I/O for 10 seconds... 
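On the initiator side the trace shows the whole flow explicitly; condensed, the TLS attach and the timed run boil down to the two commands below, copied from the invocation above.

  # Initiator side, as traced above: attach a TLS-protected NVMe/TCP controller
  # through bdevperf's RPC socket, then start the 10-second verify run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests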
00:17:47.322 00:17:47.322 Latency(us) 00:17:47.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.322 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.322 Verification LBA range: start 0x0 length 0x2000 00:17:47.322 TLSTESTn1 : 10.02 3062.97 11.96 0.00 0.00 41705.75 8043.05 33602.09 00:17:47.322 =================================================================================================================== 00:17:47.322 Total : 3062.97 11.96 0.00 0.00 41705.75 8043.05 33602.09 00:17:47.322 0 00:17:47.322 06:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:47.322 06:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:47.322 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:47.322 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:47.322 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.580 nvmf_trace.0 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77379 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77379 ']' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77379 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77379 00:17:47.580 killing process with pid 77379 00:17:47.580 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.580 00:17:47.580 Latency(us) 00:17:47.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.580 =================================================================================================================== 00:17:47.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77379' 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77379 00:17:47.580 [2024-07-11 06:04:03.370269] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.580 06:04:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77379 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
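Before killing bdevperf, the cleanup path archives the target's trace shared-memory file so the run can be replayed offline with spdk_trace; condensed from the trace above, with $output_dir standing in for the spdk/../output path shown there.

  # Archive /dev/shm/nvmf_trace.0 for offline analysis, as traced above.
  # $output_dir stands in for /home/vagrant/spdk_repo/spdk/../output.
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')      # -> nvmf_trace.0
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
  done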
00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.954 rmmod nvme_tcp 00:17:48.954 rmmod nvme_fabrics 00:17:48.954 rmmod nvme_keyring 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 77338 ']' 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 77338 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 77338 ']' 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 77338 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77338 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.954 killing process with pid 77338 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77338' 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 77338 00:17:48.954 [2024-07-11 06:04:04.569141] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.954 06:04:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 77338 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.889 06:04:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.890 06:04:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:49.890 06:04:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:49.890 00:17:49.890 real 0m16.095s 00:17:49.890 user 0m22.914s 00:17:49.890 sys 0m5.442s 00:17:49.890 06:04:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.890 06:04:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:49.890 ************************************ 00:17:49.890 END TEST nvmf_fips 00:17:49.890 ************************************ 00:17:49.890 06:04:05 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:49.890 06:04:05 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:17:49.890 06:04:05 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:49.890 06:04:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:49.890 06:04:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.890 06:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:49.890 ************************************ 00:17:49.890 START TEST nvmf_fuzz 00:17:49.890 ************************************ 00:17:49.890 06:04:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:50.149 * Looking for test storage... 00:17:50.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.149 06:04:05 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:50.149 Cannot find device "nvmf_tgt_br" 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.149 Cannot find device "nvmf_tgt_br2" 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:50.149 Cannot find device "nvmf_tgt_br" 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:50.149 Cannot find device "nvmf_tgt_br2" 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:17:50.149 06:04:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:50.149 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:50.149 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.150 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:50.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:17:50.408 00:17:50.408 --- 10.0.0.2 ping statistics --- 00:17:50.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.408 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:50.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:17:50.408 00:17:50.408 --- 10.0.0.3 ping statistics --- 00:17:50.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.408 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:50.408 00:17:50.408 --- 10.0.0.1 ping statistics --- 00:17:50.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.408 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.408 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77735 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77735 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 77735 ']' 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
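For orientation, the nvmf_veth_init trace above reduces to a small fixed topology: the initiator stays in the root network namespace on 10.0.0.1, the two target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and all veth peers are joined by the nvmf_br bridge, with TCP port 4420 opened for NVMe/TCP. A condensed sketch of those steps, taken from the ip/iptables calls in the log (the cleanup, retries and error handling of nvmf/common.sh are omitted, so treat it as illustrative rather than the exact script):

    # Condensed from the nvmf_veth_init trace above; cleanup and error handling omitted.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # sanity-check both target addresses

The three pings in the log (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside nvmf_tgt_ns_spdk) are exactly this sanity check before the target application is started.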
00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.409 06:04:06 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.343 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.657 Malloc0 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:51.657 06:04:07 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:52.240 Shutting down the fuzz application 00:17:52.240 06:04:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:53.174 Shutting down the fuzz application 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.174 06:04:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.174 rmmod nvme_tcp 00:17:53.174 rmmod nvme_fabrics 00:17:53.174 rmmod nvme_keyring 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 77735 ']' 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 77735 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 77735 ']' 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 77735 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77735 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.174 killing process with pid 77735 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77735' 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 77735 00:17:53.174 06:04:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 77735 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:54.547 ************************************ 00:17:54.547 END TEST nvmf_fuzz 00:17:54.547 
************************************ 00:17:54.547 00:17:54.547 real 0m4.454s 00:17:54.547 user 0m5.304s 00:17:54.547 sys 0m0.798s 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.547 06:04:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:54.547 06:04:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:54.547 06:04:10 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:54.547 06:04:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:54.547 06:04:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.547 06:04:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.547 ************************************ 00:17:54.547 START TEST nvmf_multiconnection 00:17:54.547 ************************************ 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:54.547 * Looking for test storage... 00:17:54.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:54.547 Cannot find device "nvmf_tgt_br" 00:17:54.547 06:04:10 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.547 Cannot find device "nvmf_tgt_br2" 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:54.547 Cannot find device "nvmf_tgt_br" 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:17:54.547 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:54.807 Cannot find device "nvmf_tgt_br2" 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.807 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:55.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:17:55.065 00:17:55.065 --- 10.0.0.2 ping statistics --- 00:17:55.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.065 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:55.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:55.065 00:17:55.065 --- 10.0.0.3 ping statistics --- 00:17:55.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.065 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:55.065 00:17:55.065 --- 10.0.0.1 ping statistics --- 00:17:55.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.065 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=77946 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 77946 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 77946 ']' 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.065 06:04:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:55.065 [2024-07-11 06:04:10.945966] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:17:55.065 [2024-07-11 06:04:10.946182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.322 [2024-07-11 06:04:11.124133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.579 [2024-07-11 06:04:11.363290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
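The fabrics target for the multiconnection test is started the same way as in the fuzz test above, only on four cores (-m 0xF): nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace and the test blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming a simple poll of rpc_get_methods stands in for the real waitforlisten helper in autotest_common.sh:

    # Sketch of the nvmfappstart/waitforlisten pattern traced above.
    # The polling loop is an illustrative assumption, not the helper's actual code.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target answers on its default RPC socket before issuing any rpc.py call.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

The startup notices that follow (DPDK EAL initialization, one reactor per core in the 0xF mask, and the uring socket implementation override) are the target coming up inside the namespace.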
00:17:55.579 [2024-07-11 06:04:11.363372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.579 [2024-07-11 06:04:11.363400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.579 [2024-07-11 06:04:11.363417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.579 [2024-07-11 06:04:11.363431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.579 [2024-07-11 06:04:11.363708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.579 [2024-07-11 06:04:11.364414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.579 [2024-07-11 06:04:11.364550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.579 [2024-07-11 06:04:11.364574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.836 [2024-07-11 06:04:11.536535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 [2024-07-11 06:04:11.887297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 Malloc1 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.095 [2024-07-11 06:04:12.007712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.095 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.352 Malloc2 00:17:56.352 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.352 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:56.352 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.352 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 Malloc3 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
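From here the multiconnection setup is repetitive: for each of the 11 subsystems the test issues the same four RPCs the trace shows for cnode1 and cnode2, and repeats below for cnode3 through cnode11: create a 64 MB malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Condensed into one loop, with rpc used as hypothetical shorthand for the test's rpc_cmd wrapper:

    # One condensed pass over the subsystem-creation RPCs traced above.
    # "rpc" is shorthand assumed here for the rpc_cmd wrapper (scripts/rpc.py on the default socket).
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192                              # done once, before the loop
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc bdev_malloc_create 64 512 -b "Malloc$i"                          # 64 MB bdev, 512-byte blocks
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a allows any host
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Once all 11 subsystems are listening, the test loops again with nvme connect from the initiator side, which is where the trace continues.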
00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 Malloc4 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.353 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.611 06:04:12 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 Malloc5 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 Malloc6 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:56.611 06:04:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 Malloc7 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.611 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 Malloc8 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 Malloc9 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.870 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 Malloc10 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 Malloc11 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:57.129 06:04:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:57.387 06:04:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:57.387 06:04:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:17:57.387 06:04:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.387 06:04:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:57.387 06:04:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:17:59.287 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:59.287 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:59.287 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:17:59.287 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:59.287 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.288 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:17:59.288 06:04:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:59.288 06:04:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:59.546 06:04:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:59.546 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:17:59.546 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.546 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:59.546 06:04:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:01.446 06:04:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:03.972 06:04:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:05.873 06:04:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:07.787 06:04:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:07.787 06:04:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:08.053 06:04:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:08.053 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:08.053 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.053 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:08.053 06:04:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.986 06:04:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:10.245 06:04:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:10.245 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:10.245 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.245 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:10.245 06:04:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:12.149 06:04:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:12.149 06:04:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:12.149 06:04:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:18:12.149 
06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:12.149 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.149 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:12.149 06:04:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.149 06:04:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:12.408 06:04:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:12.408 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:12.408 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.408 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:12.408 06:04:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.315 06:04:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:14.573 06:04:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:14.573 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:14.573 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.573 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:14.573 06:04:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.475 06:04:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:16.734 06:04:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:16.734 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:16.734 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.734 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:16.734 06:04:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.632 06:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:18.890 06:04:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:18.890 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.890 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.890 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:18.890 06:04:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:18:20.788 06:04:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:21.046 [global] 00:18:21.046 thread=1 00:18:21.046 invalidate=1 00:18:21.046 rw=read 00:18:21.046 time_based=1 00:18:21.046 
runtime=10 00:18:21.046 ioengine=libaio 00:18:21.046 direct=1 00:18:21.046 bs=262144 00:18:21.046 iodepth=64 00:18:21.046 norandommap=1 00:18:21.046 numjobs=1 00:18:21.046 00:18:21.046 [job0] 00:18:21.046 filename=/dev/nvme0n1 00:18:21.046 [job1] 00:18:21.046 filename=/dev/nvme10n1 00:18:21.046 [job2] 00:18:21.046 filename=/dev/nvme1n1 00:18:21.046 [job3] 00:18:21.046 filename=/dev/nvme2n1 00:18:21.046 [job4] 00:18:21.046 filename=/dev/nvme3n1 00:18:21.046 [job5] 00:18:21.046 filename=/dev/nvme4n1 00:18:21.046 [job6] 00:18:21.046 filename=/dev/nvme5n1 00:18:21.046 [job7] 00:18:21.046 filename=/dev/nvme6n1 00:18:21.046 [job8] 00:18:21.046 filename=/dev/nvme7n1 00:18:21.046 [job9] 00:18:21.046 filename=/dev/nvme8n1 00:18:21.046 [job10] 00:18:21.046 filename=/dev/nvme9n1 00:18:21.046 Could not set queue depth (nvme0n1) 00:18:21.046 Could not set queue depth (nvme10n1) 00:18:21.046 Could not set queue depth (nvme1n1) 00:18:21.046 Could not set queue depth (nvme2n1) 00:18:21.046 Could not set queue depth (nvme3n1) 00:18:21.046 Could not set queue depth (nvme4n1) 00:18:21.046 Could not set queue depth (nvme5n1) 00:18:21.046 Could not set queue depth (nvme6n1) 00:18:21.046 Could not set queue depth (nvme7n1) 00:18:21.046 Could not set queue depth (nvme8n1) 00:18:21.046 Could not set queue depth (nvme9n1) 00:18:21.305 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:21.305 fio-3.35 00:18:21.305 Starting 11 threads 00:18:33.507 00:18:33.507 job0: (groupid=0, jobs=1): err= 0: pid=78401: Thu Jul 11 06:04:47 2024 00:18:33.507 read: IOPS=558, BW=140MiB/s (146MB/s)(1411MiB/10103msec) 00:18:33.507 slat (usec): min=21, max=50854, avg=1750.73, stdev=3911.47 00:18:33.507 clat (msec): min=15, max=243, avg=112.68, stdev=20.41 00:18:33.507 lat (msec): min=15, max=243, avg=114.43, stdev=20.65 00:18:33.507 clat percentiles (msec): 00:18:33.507 | 1.00th=[ 56], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 97], 00:18:33.507 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 113], 00:18:33.507 | 70.00th=[ 126], 80.00th=[ 134], 90.00th=[ 140], 95.00th=[ 144], 00:18:33.507 | 99.00th=[ 155], 99.50th=[ 190], 99.90th=[ 236], 99.95th=[ 236], 00:18:33.507 | 99.99th=[ 245] 00:18:33.507 bw ( KiB/s): min=115200, max=165888, per=7.70%, avg=142816.85, 
stdev=19582.03, samples=20 00:18:33.507 iops : min= 450, max= 648, avg=557.80, stdev=76.44, samples=20 00:18:33.507 lat (msec) : 20=0.18%, 100=28.55%, 250=71.27% 00:18:33.507 cpu : usr=0.33%, sys=2.10%, ctx=1325, majf=0, minf=4097 00:18:33.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:33.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.507 issued rwts: total=5643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.507 job1: (groupid=0, jobs=1): err= 0: pid=78402: Thu Jul 11 06:04:47 2024 00:18:33.507 read: IOPS=611, BW=153MiB/s (160MB/s)(1537MiB/10055msec) 00:18:33.507 slat (usec): min=17, max=56345, avg=1623.05, stdev=3551.41 00:18:33.507 clat (msec): min=44, max=147, avg=103.00, stdev= 9.47 00:18:33.507 lat (msec): min=56, max=147, avg=104.62, stdev= 9.48 00:18:33.507 clat percentiles (msec): 00:18:33.507 | 1.00th=[ 81], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:18:33.507 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:18:33.507 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 118], 00:18:33.507 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 148], 00:18:33.507 | 99.99th=[ 148] 00:18:33.507 bw ( KiB/s): min=137728, max=161792, per=8.40%, avg=155677.30, stdev=5320.13, samples=20 00:18:33.507 iops : min= 538, max= 632, avg=607.90, stdev=20.70, samples=20 00:18:33.507 lat (msec) : 50=0.02%, 100=37.06%, 250=62.92% 00:18:33.507 cpu : usr=0.44%, sys=2.46%, ctx=1413, majf=0, minf=4097 00:18:33.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.507 issued rwts: total=6146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.507 job2: (groupid=0, jobs=1): err= 0: pid=78403: Thu Jul 11 06:04:47 2024 00:18:33.507 read: IOPS=614, BW=154MiB/s (161MB/s)(1544MiB/10053msec) 00:18:33.507 slat (usec): min=19, max=28058, avg=1615.65, stdev=3501.76 00:18:33.507 clat (msec): min=30, max=154, avg=102.50, stdev=10.21 00:18:33.507 lat (msec): min=30, max=154, avg=104.11, stdev=10.25 00:18:33.507 clat percentiles (msec): 00:18:33.507 | 1.00th=[ 77], 5.00th=[ 88], 10.00th=[ 92], 20.00th=[ 95], 00:18:33.507 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:18:33.507 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 118], 00:18:33.507 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 148], 00:18:33.507 | 99.99th=[ 155] 00:18:33.507 bw ( KiB/s): min=149504, max=162304, per=8.43%, avg=156379.20, stdev=3687.62, samples=20 00:18:33.507 iops : min= 584, max= 634, avg=610.60, stdev=14.36, samples=20 00:18:33.507 lat (msec) : 50=0.29%, 100=39.41%, 250=60.30% 00:18:33.507 cpu : usr=0.29%, sys=2.08%, ctx=1469, majf=0, minf=4097 00:18:33.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.507 issued rwts: total=6174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.507 job3: (groupid=0, 
jobs=1): err= 0: pid=78404: Thu Jul 11 06:04:47 2024 00:18:33.507 read: IOPS=617, BW=154MiB/s (162MB/s)(1556MiB/10077msec) 00:18:33.507 slat (usec): min=18, max=42167, avg=1601.80, stdev=3521.44 00:18:33.507 clat (msec): min=40, max=176, avg=101.88, stdev=10.43 00:18:33.508 lat (msec): min=40, max=176, avg=103.48, stdev=10.48 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 78], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 94], 00:18:33.508 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 105], 00:18:33.508 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 114], 95.00th=[ 117], 00:18:33.508 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 169], 99.95th=[ 176], 00:18:33.508 | 99.99th=[ 176] 00:18:33.508 bw ( KiB/s): min=151552, max=163840, per=8.50%, avg=157696.40, stdev=3395.17, samples=20 00:18:33.508 iops : min= 592, max= 640, avg=615.95, stdev=13.27, samples=20 00:18:33.508 lat (msec) : 50=0.19%, 100=43.08%, 250=56.73% 00:18:33.508 cpu : usr=0.39%, sys=2.80%, ctx=1442, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=6223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job4: (groupid=0, jobs=1): err= 0: pid=78405: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=618, BW=155MiB/s (162MB/s)(1557MiB/10075msec) 00:18:33.508 slat (usec): min=20, max=40963, avg=1595.78, stdev=3524.93 00:18:33.508 clat (msec): min=25, max=174, avg=101.79, stdev=11.25 00:18:33.508 lat (msec): min=25, max=174, avg=103.39, stdev=11.32 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 80], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 95], 00:18:33.508 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:18:33.508 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 118], 00:18:33.508 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 169], 00:18:33.508 | 99.99th=[ 176] 00:18:33.508 bw ( KiB/s): min=148992, max=164864, per=8.51%, avg=157766.45, stdev=4259.59, samples=20 00:18:33.508 iops : min= 582, max= 644, avg=616.20, stdev=16.61, samples=20 00:18:33.508 lat (msec) : 50=0.75%, 100=43.17%, 250=56.08% 00:18:33.508 cpu : usr=0.28%, sys=2.27%, ctx=1412, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=6227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job5: (groupid=0, jobs=1): err= 0: pid=78406: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=545, BW=136MiB/s (143MB/s)(1378MiB/10100msec) 00:18:33.508 slat (usec): min=21, max=32943, avg=1809.07, stdev=3881.12 00:18:33.508 clat (msec): min=46, max=237, avg=115.30, stdev=19.47 00:18:33.508 lat (msec): min=46, max=237, avg=117.11, stdev=19.71 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 100], 00:18:33.508 | 30.00th=[ 103], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 117], 00:18:33.508 | 70.00th=[ 129], 80.00th=[ 136], 90.00th=[ 140], 95.00th=[ 146], 00:18:33.508 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 226], 
99.95th=[ 226], 00:18:33.508 | 99.99th=[ 239] 00:18:33.508 bw ( KiB/s): min=112640, max=163001, per=7.52%, avg=139472.20, stdev=18214.82, samples=20 00:18:33.508 iops : min= 440, max= 636, avg=544.60, stdev=71.16, samples=20 00:18:33.508 lat (msec) : 50=0.09%, 100=23.82%, 250=76.09% 00:18:33.508 cpu : usr=0.35%, sys=2.03%, ctx=1323, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=5512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job6: (groupid=0, jobs=1): err= 0: pid=78407: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=547, BW=137MiB/s (144MB/s)(1384MiB/10099msec) 00:18:33.508 slat (usec): min=20, max=44709, avg=1801.61, stdev=3927.77 00:18:33.508 clat (msec): min=41, max=234, avg=114.87, stdev=18.98 00:18:33.508 lat (msec): min=42, max=241, avg=116.67, stdev=19.25 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 100], 00:18:33.508 | 30.00th=[ 104], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 116], 00:18:33.508 | 70.00th=[ 128], 80.00th=[ 134], 90.00th=[ 140], 95.00th=[ 144], 00:18:33.508 | 99.00th=[ 155], 99.50th=[ 171], 99.90th=[ 228], 99.95th=[ 228], 00:18:33.508 | 99.99th=[ 236] 00:18:33.508 bw ( KiB/s): min=115200, max=161469, per=7.55%, avg=140064.55, stdev=17946.65, samples=20 00:18:33.508 iops : min= 450, max= 630, avg=546.95, stdev=70.14, samples=20 00:18:33.508 lat (msec) : 50=0.49%, 100=21.49%, 250=78.03% 00:18:33.508 cpu : usr=0.41%, sys=2.36%, ctx=1267, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=5534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job7: (groupid=0, jobs=1): err= 0: pid=78408: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=1196, BW=299MiB/s (314MB/s)(2996MiB/10012msec) 00:18:33.508 slat (usec): min=16, max=25043, avg=830.46, stdev=1948.05 00:18:33.508 clat (msec): min=9, max=111, avg=52.59, stdev=16.90 00:18:33.508 lat (msec): min=12, max=111, avg=53.42, stdev=17.14 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:18:33.508 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 41], 60.00th=[ 66], 00:18:33.508 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 72], 95.00th=[ 75], 00:18:33.508 | 99.00th=[ 87], 99.50th=[ 97], 99.90th=[ 109], 99.95th=[ 111], 00:18:33.508 | 99.99th=[ 111] 00:18:33.508 bw ( KiB/s): min=182272, max=441715, per=16.46%, avg=305269.75, stdev=99003.22, samples=20 00:18:33.508 iops : min= 712, max= 1725, avg=1192.35, stdev=386.71, samples=20 00:18:33.508 lat (msec) : 10=0.01%, 20=0.16%, 50=52.27%, 100=47.13%, 250=0.44% 00:18:33.508 cpu : usr=0.43%, sys=3.73%, ctx=2541, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: 
total=11983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job8: (groupid=0, jobs=1): err= 0: pid=78409: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=793, BW=198MiB/s (208MB/s)(1999MiB/10078msec) 00:18:33.508 slat (usec): min=20, max=73938, avg=1226.79, stdev=2969.63 00:18:33.508 clat (msec): min=20, max=167, avg=79.26, stdev=18.53 00:18:33.508 lat (msec): min=20, max=169, avg=80.49, stdev=18.68 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 67], 00:18:33.508 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 74], 00:18:33.508 | 70.00th=[ 83], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 114], 00:18:33.508 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:18:33.508 | 99.99th=[ 169] 00:18:33.508 bw ( KiB/s): min=150016, max=242176, per=10.95%, avg=203062.90, stdev=37010.79, samples=20 00:18:33.508 iops : min= 586, max= 946, avg=793.20, stdev=144.56, samples=20 00:18:33.508 lat (msec) : 50=0.50%, 100=82.86%, 250=16.64% 00:18:33.508 cpu : usr=0.33%, sys=2.66%, ctx=1739, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=7997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job9: (groupid=0, jobs=1): err= 0: pid=78410: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=553, BW=138MiB/s (145MB/s)(1397MiB/10099msec) 00:18:33.508 slat (usec): min=19, max=31116, avg=1775.08, stdev=3784.19 00:18:33.508 clat (msec): min=10, max=236, avg=113.73, stdev=20.30 00:18:33.508 lat (msec): min=11, max=241, avg=115.50, stdev=20.60 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 66], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 99], 00:18:33.508 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 115], 00:18:33.508 | 70.00th=[ 129], 80.00th=[ 134], 90.00th=[ 140], 95.00th=[ 144], 00:18:33.508 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 228], 99.95th=[ 228], 00:18:33.508 | 99.99th=[ 236] 00:18:33.508 bw ( KiB/s): min=114176, max=168448, per=7.63%, avg=141467.40, stdev=19748.55, samples=20 00:18:33.508 iops : min= 446, max= 658, avg=552.55, stdev=77.11, samples=20 00:18:33.508 lat (msec) : 20=0.05%, 50=0.50%, 100=25.01%, 250=74.43% 00:18:33.508 cpu : usr=0.22%, sys=1.84%, ctx=1375, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=5589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 job10: (groupid=0, jobs=1): err= 0: pid=78411: Thu Jul 11 06:04:47 2024 00:18:33.508 read: IOPS=610, BW=153MiB/s (160MB/s)(1537MiB/10064msec) 00:18:33.508 slat (usec): min=21, max=66858, avg=1622.79, stdev=3685.87 00:18:33.508 clat (msec): min=57, max=161, avg=103.06, stdev= 9.81 00:18:33.508 lat (msec): min=67, max=161, avg=104.69, stdev= 9.83 00:18:33.508 clat percentiles (msec): 00:18:33.508 | 1.00th=[ 80], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:18:33.508 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 103], 
60.00th=[ 105], 00:18:33.508 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 121], 00:18:33.508 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 159], 99.95th=[ 159], 00:18:33.508 | 99.99th=[ 161] 00:18:33.508 bw ( KiB/s): min=142621, max=166067, per=8.40%, avg=155706.90, stdev=5327.06, samples=20 00:18:33.508 iops : min= 557, max= 648, avg=608.15, stdev=20.76, samples=20 00:18:33.508 lat (msec) : 100=36.85%, 250=63.15% 00:18:33.508 cpu : usr=0.28%, sys=2.33%, ctx=1365, majf=0, minf=4097 00:18:33.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:33.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:33.508 issued rwts: total=6146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.508 00:18:33.508 Run status group 0 (all jobs): 00:18:33.508 READ: bw=1811MiB/s (1899MB/s), 136MiB/s-299MiB/s (143MB/s-314MB/s), io=17.9GiB (19.2GB), run=10012-10103msec 00:18:33.508 00:18:33.508 Disk stats (read/write): 00:18:33.509 nvme0n1: ios=11186/0, merge=0/0, ticks=1231054/0, in_queue=1231054, util=97.85% 00:18:33.509 nvme10n1: ios=12167/0, merge=0/0, ticks=1234951/0, in_queue=1234951, util=97.88% 00:18:33.509 nvme1n1: ios=12225/0, merge=0/0, ticks=1234196/0, in_queue=1234196, util=98.03% 00:18:33.509 nvme2n1: ios=12339/0, merge=0/0, ticks=1235028/0, in_queue=1235028, util=98.22% 00:18:33.509 nvme3n1: ios=12343/0, merge=0/0, ticks=1233990/0, in_queue=1233990, util=98.26% 00:18:33.509 nvme4n1: ios=10904/0, merge=0/0, ticks=1229619/0, in_queue=1229619, util=98.47% 00:18:33.509 nvme5n1: ios=10956/0, merge=0/0, ticks=1231775/0, in_queue=1231775, util=98.52% 00:18:33.509 nvme6n1: ios=23886/0, merge=0/0, ticks=1242980/0, in_queue=1242980, util=98.62% 00:18:33.509 nvme7n1: ios=15876/0, merge=0/0, ticks=1236665/0, in_queue=1236665, util=98.93% 00:18:33.509 nvme8n1: ios=11073/0, merge=0/0, ticks=1233121/0, in_queue=1233121, util=99.08% 00:18:33.509 nvme9n1: ios=12188/0, merge=0/0, ticks=1235326/0, in_queue=1235326, util=99.19% 00:18:33.509 06:04:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:33.509 [global] 00:18:33.509 thread=1 00:18:33.509 invalidate=1 00:18:33.509 rw=randwrite 00:18:33.509 time_based=1 00:18:33.509 runtime=10 00:18:33.509 ioengine=libaio 00:18:33.509 direct=1 00:18:33.509 bs=262144 00:18:33.509 iodepth=64 00:18:33.509 norandommap=1 00:18:33.509 numjobs=1 00:18:33.509 00:18:33.509 [job0] 00:18:33.509 filename=/dev/nvme0n1 00:18:33.509 [job1] 00:18:33.509 filename=/dev/nvme10n1 00:18:33.509 [job2] 00:18:33.509 filename=/dev/nvme1n1 00:18:33.509 [job3] 00:18:33.509 filename=/dev/nvme2n1 00:18:33.509 [job4] 00:18:33.509 filename=/dev/nvme3n1 00:18:33.509 [job5] 00:18:33.509 filename=/dev/nvme4n1 00:18:33.509 [job6] 00:18:33.509 filename=/dev/nvme5n1 00:18:33.509 [job7] 00:18:33.509 filename=/dev/nvme6n1 00:18:33.509 [job8] 00:18:33.509 filename=/dev/nvme7n1 00:18:33.509 [job9] 00:18:33.509 filename=/dev/nvme8n1 00:18:33.509 [job10] 00:18:33.509 filename=/dev/nvme9n1 00:18:33.509 Could not set queue depth (nvme0n1) 00:18:33.509 Could not set queue depth (nvme10n1) 00:18:33.509 Could not set queue depth (nvme1n1) 00:18:33.509 Could not set queue depth (nvme2n1) 00:18:33.509 Could not set queue depth (nvme3n1) 00:18:33.509 Could not set queue depth (nvme4n1) 
00:18:33.509 Could not set queue depth (nvme5n1) 00:18:33.509 Could not set queue depth (nvme6n1) 00:18:33.509 Could not set queue depth (nvme7n1) 00:18:33.509 Could not set queue depth (nvme8n1) 00:18:33.509 Could not set queue depth (nvme9n1) 00:18:33.509 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:33.509 fio-3.35 00:18:33.509 Starting 11 threads 00:18:43.574 00:18:43.574 job0: (groupid=0, jobs=1): err= 0: pid=78614: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=358, BW=89.7MiB/s (94.1MB/s)(912MiB/10168msec); 0 zone resets 00:18:43.574 slat (usec): min=16, max=28186, avg=2736.24, stdev=4732.26 00:18:43.574 clat (msec): min=21, max=339, avg=175.52, stdev=19.06 00:18:43.574 lat (msec): min=21, max=339, avg=178.26, stdev=18.73 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 86], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 169], 00:18:43.574 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:43.574 | 70.00th=[ 180], 80.00th=[ 180], 90.00th=[ 182], 95.00th=[ 182], 00:18:43.574 | 99.00th=[ 239], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 338], 00:18:43.574 | 99.99th=[ 338] 00:18:43.574 bw ( KiB/s): min=90112, max=94208, per=7.04%, avg=91766.80, stdev=1085.06, samples=20 00:18:43.574 iops : min= 352, max= 368, avg=358.45, stdev= 4.24, samples=20 00:18:43.574 lat (msec) : 50=0.55%, 100=0.66%, 250=97.97%, 500=0.82% 00:18:43.574 cpu : usr=0.69%, sys=1.06%, ctx=2604, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,3649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job1: (groupid=0, jobs=1): err= 0: pid=78615: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=445, BW=111MiB/s (117MB/s)(1127MiB/10129msec); 0 zone resets 00:18:43.574 slat (usec): min=18, max=78048, avg=2212.22, stdev=3938.24 00:18:43.574 clat (msec): min=84, max=266, avg=141.51, stdev=12.20 00:18:43.574 lat (msec): min=84, max=266, avg=143.73, 
stdev=11.69 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 138], 00:18:43.574 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 00:18:43.574 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 159], 00:18:43.574 | 99.00th=[ 199], 99.50th=[ 228], 99.90th=[ 259], 99.95th=[ 259], 00:18:43.574 | 99.99th=[ 268] 00:18:43.574 bw ( KiB/s): min=90112, max=118784, per=8.73%, avg=113780.30, stdev=6432.19, samples=20 00:18:43.574 iops : min= 352, max= 464, avg=444.45, stdev=25.12, samples=20 00:18:43.574 lat (msec) : 100=0.18%, 250=99.69%, 500=0.13% 00:18:43.574 cpu : usr=0.70%, sys=1.39%, ctx=5330, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,4508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job2: (groupid=0, jobs=1): err= 0: pid=78627: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=359, BW=89.9MiB/s (94.3MB/s)(914MiB/10168msec); 0 zone resets 00:18:43.574 slat (usec): min=17, max=15556, avg=2733.03, stdev=4714.90 00:18:43.574 clat (msec): min=18, max=336, avg=175.17, stdev=19.08 00:18:43.574 lat (msec): min=18, max=336, avg=177.91, stdev=18.76 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 86], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 169], 00:18:43.574 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:43.574 | 70.00th=[ 180], 80.00th=[ 180], 90.00th=[ 182], 95.00th=[ 182], 00:18:43.574 | 99.00th=[ 236], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 338], 00:18:43.574 | 99.99th=[ 338] 00:18:43.574 bw ( KiB/s): min=90112, max=96256, per=7.05%, avg=91955.20, stdev=1470.84, samples=20 00:18:43.574 iops : min= 352, max= 376, avg=359.20, stdev= 5.75, samples=20 00:18:43.574 lat (msec) : 20=0.11%, 50=0.44%, 100=0.66%, 250=97.98%, 500=0.82% 00:18:43.574 cpu : usr=0.63%, sys=0.86%, ctx=5024, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,3656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job3: (groupid=0, jobs=1): err= 0: pid=78628: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=947, BW=237MiB/s (248MB/s)(2384MiB/10060msec); 0 zone resets 00:18:43.574 slat (usec): min=15, max=47921, avg=1043.97, stdev=1827.02 00:18:43.574 clat (msec): min=49, max=145, avg=66.47, stdev= 8.84 00:18:43.574 lat (msec): min=49, max=145, avg=67.51, stdev= 8.80 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 61], 5.00th=[ 62], 10.00th=[ 62], 20.00th=[ 63], 00:18:43.574 | 30.00th=[ 65], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 66], 00:18:43.574 | 70.00th=[ 66], 80.00th=[ 67], 90.00th=[ 67], 95.00th=[ 81], 00:18:43.574 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 133], 99.95th=[ 140], 00:18:43.574 | 99.99th=[ 146] 00:18:43.574 bw ( KiB/s): min=141595, max=252928, per=18.60%, avg=242471.75, stdev=25700.80, samples=20 00:18:43.574 iops : min= 553, max= 988, avg=947.10, stdev=100.40, samples=20 00:18:43.574 lat (msec) : 50=0.04%, 100=96.53%, 
250=3.43% 00:18:43.574 cpu : usr=1.35%, sys=2.23%, ctx=11506, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,9534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job4: (groupid=0, jobs=1): err= 0: pid=78629: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=464, BW=116MiB/s (122MB/s)(1176MiB/10131msec); 0 zone resets 00:18:43.574 slat (usec): min=17, max=45552, avg=2121.17, stdev=3759.01 00:18:43.574 clat (msec): min=8, max=269, avg=135.64, stdev=17.34 00:18:43.574 lat (msec): min=8, max=269, avg=137.76, stdev=17.20 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 74], 5.00th=[ 108], 10.00th=[ 122], 20.00th=[ 131], 00:18:43.574 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:18:43.574 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 150], 00:18:43.574 | 99.00th=[ 171], 99.50th=[ 213], 99.90th=[ 262], 99.95th=[ 262], 00:18:43.574 | 99.99th=[ 271] 00:18:43.574 bw ( KiB/s): min=106496, max=155648, per=9.11%, avg=118823.30, stdev=9520.92, samples=20 00:18:43.574 iops : min= 416, max= 608, avg=464.15, stdev=37.19, samples=20 00:18:43.574 lat (msec) : 10=0.04%, 20=0.09%, 50=0.43%, 100=2.66%, 250=96.58% 00:18:43.574 lat (msec) : 500=0.21% 00:18:43.574 cpu : usr=0.71%, sys=1.35%, ctx=4095, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,4705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job5: (groupid=0, jobs=1): err= 0: pid=78631: Thu Jul 11 06:04:58 2024 00:18:43.574 write: IOPS=461, BW=115MiB/s (121MB/s)(1168MiB/10124msec); 0 zone resets 00:18:43.574 slat (usec): min=17, max=54194, avg=2107.42, stdev=3788.42 00:18:43.574 clat (msec): min=13, max=261, avg=136.49, stdev=16.12 00:18:43.574 lat (msec): min=13, max=261, avg=138.60, stdev=15.95 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 72], 5.00th=[ 112], 10.00th=[ 129], 20.00th=[ 132], 00:18:43.574 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:18:43.574 | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 150], 00:18:43.574 | 99.00th=[ 174], 99.50th=[ 211], 99.90th=[ 253], 99.95th=[ 253], 00:18:43.574 | 99.99th=[ 262] 00:18:43.574 bw ( KiB/s): min=106496, max=139264, per=9.05%, avg=118029.50, stdev=7187.79, samples=20 00:18:43.574 iops : min= 416, max= 544, avg=461.05, stdev=28.07, samples=20 00:18:43.574 lat (msec) : 20=0.17%, 50=0.43%, 100=1.35%, 250=97.92%, 500=0.13% 00:18:43.574 cpu : usr=0.82%, sys=1.39%, ctx=2335, majf=0, minf=1 00:18:43.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:43.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.574 issued rwts: total=0,4673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.574 job6: (groupid=0, jobs=1): err= 0: pid=78632: Thu Jul 11 
06:04:58 2024 00:18:43.574 write: IOPS=448, BW=112MiB/s (118MB/s)(1136MiB/10129msec); 0 zone resets 00:18:43.574 slat (usec): min=17, max=17870, avg=2194.98, stdev=3780.65 00:18:43.574 clat (msec): min=21, max=266, avg=140.40, stdev=14.05 00:18:43.574 lat (msec): min=21, max=266, avg=142.60, stdev=13.73 00:18:43.574 clat percentiles (msec): 00:18:43.574 | 1.00th=[ 100], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 00:18:43.574 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 00:18:43.574 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 157], 00:18:43.575 | 99.00th=[ 171], 99.50th=[ 215], 99.90th=[ 259], 99.95th=[ 259], 00:18:43.575 | 99.99th=[ 268] 00:18:43.575 bw ( KiB/s): min=106496, max=118784, per=8.80%, avg=114713.60, stdev=3474.44, samples=20 00:18:43.575 iops : min= 416, max= 464, avg=448.10, stdev=13.57, samples=20 00:18:43.575 lat (msec) : 50=0.51%, 100=0.53%, 250=98.83%, 500=0.13% 00:18:43.575 cpu : usr=0.76%, sys=1.29%, ctx=3098, majf=0, minf=1 00:18:43.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:43.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.575 issued rwts: total=0,4544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.575 job7: (groupid=0, jobs=1): err= 0: pid=78633: Thu Jul 11 06:04:58 2024 00:18:43.575 write: IOPS=447, BW=112MiB/s (117MB/s)(1133MiB/10134msec); 0 zone resets 00:18:43.575 slat (usec): min=18, max=31772, avg=2200.73, stdev=3795.59 00:18:43.575 clat (msec): min=35, max=269, avg=140.81, stdev=13.01 00:18:43.575 lat (msec): min=35, max=269, avg=143.01, stdev=12.62 00:18:43.575 clat percentiles (msec): 00:18:43.575 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 136], 00:18:43.575 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 142], 00:18:43.575 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 157], 00:18:43.575 | 99.00th=[ 174], 99.50th=[ 218], 99.90th=[ 262], 99.95th=[ 262], 00:18:43.575 | 99.99th=[ 271] 00:18:43.575 bw ( KiB/s): min=102400, max=118784, per=8.77%, avg=114408.60, stdev=4317.62, samples=20 00:18:43.575 iops : min= 400, max= 464, avg=446.90, stdev=16.86, samples=20 00:18:43.575 lat (msec) : 50=0.26%, 100=0.44%, 250=99.07%, 500=0.22% 00:18:43.575 cpu : usr=0.79%, sys=1.32%, ctx=5973, majf=0, minf=1 00:18:43.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:43.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.575 issued rwts: total=0,4533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.575 job8: (groupid=0, jobs=1): err= 0: pid=78634: Thu Jul 11 06:04:58 2024 00:18:43.575 write: IOPS=467, BW=117MiB/s (123MB/s)(1183MiB/10122msec); 0 zone resets 00:18:43.575 slat (usec): min=18, max=17102, avg=2091.48, stdev=3646.62 00:18:43.575 clat (msec): min=11, max=255, avg=134.76, stdev=17.75 00:18:43.575 lat (msec): min=11, max=255, avg=136.85, stdev=17.67 00:18:43.575 clat percentiles (msec): 00:18:43.575 | 1.00th=[ 64], 5.00th=[ 102], 10.00th=[ 108], 20.00th=[ 131], 00:18:43.575 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:18:43.575 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 148], 00:18:43.575 | 99.00th=[ 169], 
99.50th=[ 203], 99.90th=[ 247], 99.95th=[ 247], 00:18:43.575 | 99.99th=[ 255] 00:18:43.575 bw ( KiB/s): min=106496, max=160256, per=9.17%, avg=119502.80, stdev=11212.23, samples=20 00:18:43.575 iops : min= 416, max= 626, avg=466.80, stdev=43.80, samples=20 00:18:43.575 lat (msec) : 20=0.25%, 50=0.51%, 100=3.72%, 250=95.48%, 500=0.04% 00:18:43.575 cpu : usr=0.83%, sys=1.08%, ctx=4689, majf=0, minf=1 00:18:43.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:43.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.575 issued rwts: total=0,4732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.575 job9: (groupid=0, jobs=1): err= 0: pid=78635: Thu Jul 11 06:04:58 2024 00:18:43.575 write: IOPS=357, BW=89.4MiB/s (93.8MB/s)(909MiB/10168msec); 0 zone resets 00:18:43.575 slat (usec): min=16, max=48141, avg=2745.89, stdev=4770.17 00:18:43.575 clat (msec): min=49, max=336, avg=176.11, stdev=15.81 00:18:43.575 lat (msec): min=49, max=336, avg=178.85, stdev=15.29 00:18:43.575 clat percentiles (msec): 00:18:43.575 | 1.00th=[ 138], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 169], 00:18:43.575 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:43.575 | 70.00th=[ 180], 80.00th=[ 180], 90.00th=[ 182], 95.00th=[ 182], 00:18:43.575 | 99.00th=[ 236], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 338], 00:18:43.575 | 99.99th=[ 338] 00:18:43.575 bw ( KiB/s): min=86016, max=94208, per=7.01%, avg=91459.80, stdev=1688.05, samples=20 00:18:43.575 iops : min= 336, max= 368, avg=357.25, stdev= 6.61, samples=20 00:18:43.575 lat (msec) : 50=0.11%, 100=0.55%, 250=98.52%, 500=0.82% 00:18:43.575 cpu : usr=0.67%, sys=1.07%, ctx=5838, majf=0, minf=1 00:18:43.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:43.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.575 issued rwts: total=0,3637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.575 job10: (groupid=0, jobs=1): err= 0: pid=78636: Thu Jul 11 06:04:58 2024 00:18:43.575 write: IOPS=355, BW=89.0MiB/s (93.3MB/s)(904MiB/10161msec); 0 zone resets 00:18:43.575 slat (usec): min=19, max=80022, avg=2760.82, stdev=4902.63 00:18:43.575 clat (msec): min=82, max=337, avg=177.00, stdev=14.39 00:18:43.575 lat (msec): min=82, max=337, avg=179.77, stdev=13.72 00:18:43.575 clat percentiles (msec): 00:18:43.575 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 169], 00:18:43.575 | 30.00th=[ 178], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:43.575 | 70.00th=[ 180], 80.00th=[ 180], 90.00th=[ 182], 95.00th=[ 182], 00:18:43.575 | 99.00th=[ 243], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 338], 00:18:43.575 | 99.99th=[ 338] 00:18:43.575 bw ( KiB/s): min=77312, max=92487, per=6.98%, avg=90956.75, stdev=3330.75, samples=20 00:18:43.575 iops : min= 302, max= 361, avg=355.25, stdev=12.99, samples=20 00:18:43.575 lat (msec) : 100=0.19%, 250=98.98%, 500=0.83% 00:18:43.575 cpu : usr=0.48%, sys=1.03%, ctx=4221, majf=0, minf=1 00:18:43.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:43.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.575 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:43.575 issued rwts: total=0,3616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:43.575 00:18:43.575 Run status group 0 (all jobs): 00:18:43.575 WRITE: bw=1273MiB/s (1335MB/s), 89.0MiB/s-237MiB/s (93.3MB/s-248MB/s), io=12.6GiB (13.6GB), run=10060-10168msec 00:18:43.575 00:18:43.575 Disk stats (read/write): 00:18:43.575 nvme0n1: ios=50/7146, merge=0/0, ticks=60/1208002, in_queue=1208062, util=97.82% 00:18:43.575 nvme10n1: ios=49/8851, merge=0/0, ticks=60/1209283, in_queue=1209343, util=97.78% 00:18:43.575 nvme1n1: ios=46/7152, merge=0/0, ticks=33/1207691, in_queue=1207724, util=97.88% 00:18:43.575 nvme2n1: ios=30/18843, merge=0/0, ticks=36/1212148, in_queue=1212184, util=97.94% 00:18:43.575 nvme3n1: ios=36/9256, merge=0/0, ticks=96/1210747, in_queue=1210843, util=98.57% 00:18:43.575 nvme4n1: ios=20/9186, merge=0/0, ticks=78/1209910, in_queue=1209988, util=98.33% 00:18:43.575 nvme5n1: ios=0/8928, merge=0/0, ticks=0/1210319, in_queue=1210319, util=98.38% 00:18:43.575 nvme6n1: ios=0/8898, merge=0/0, ticks=0/1209098, in_queue=1209098, util=98.38% 00:18:43.575 nvme7n1: ios=0/9294, merge=0/0, ticks=0/1209387, in_queue=1209387, util=98.68% 00:18:43.575 nvme8n1: ios=0/7113, merge=0/0, ticks=0/1207874, in_queue=1207874, util=98.82% 00:18:43.575 nvme9n1: ios=0/7074, merge=0/0, ticks=0/1207587, in_queue=1207587, util=98.88% 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:43.575 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:43.575 06:04:58 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:43.575 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:43.575 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:43.576 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.576 
06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.577 rmmod nvme_tcp 00:18:43.577 rmmod nvme_fabrics 00:18:43.577 rmmod nvme_keyring 00:18:43.577 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 77946 ']' 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 77946 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 77946 ']' 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 77946 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77946 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:43.835 killing process with pid 77946 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77946' 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 77946 00:18:43.835 06:04:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 77946 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- 
# '[' '' == iso ']' 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:47.123 ************************************ 00:18:47.123 END TEST nvmf_multiconnection 00:18:47.123 ************************************ 00:18:47.123 00:18:47.123 real 0m52.115s 00:18:47.123 user 2m53.178s 00:18:47.123 sys 0m31.779s 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.123 06:05:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.123 06:05:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.123 06:05:02 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:47.123 06:05:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.123 06:05:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.123 06:05:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.123 ************************************ 00:18:47.123 START TEST nvmf_initiator_timeout 00:18:47.123 ************************************ 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:47.123 * Looking for test storage... 
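The nvmftestfini teardown that closes the multiconnection test above is the mirror image of its setup: the initiator-side NVMe-oF modules are unloaded, the nvmf_tgt process is killed and waited on, and the test network is dismantled. A condensed sketch of that sequence, using the pid and interface names from this particular run (77946 and nvmf_init_if come straight from the trace; the ip netns delete line is only an assumption about what the hidden _remove_spdk_ns helper amounts to):

    # Unload host-side NVMe-oF modules once every controller is disconnected
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the SPDK target and reap it (77946 is this run's nvmf_tgt pid)
    kill 77946
    wait 77946
    # Tear down the test network; _remove_spdk_ns hides its commands in the trace,
    # so deleting the namespace here is an assumption, not a quote from the log
    ip netns delete nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if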
00:18:47.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.123 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.124 06:05:02 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:47.124 Cannot find device "nvmf_tgt_br" 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.124 Cannot find device "nvmf_tgt_br2" 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:47.124 Cannot find device "nvmf_tgt_br" 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:47.124 Cannot find device "nvmf_tgt_br2" 00:18:47.124 06:05:02 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
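The nvmf_veth_init trace above builds the test network from scratch: a target namespace nvmf_tgt_ns_spdk, three veth pairs, the 10.0.0.1-10.0.0.3/24 addresses, and a bridge nvmf_br that joins the host-side ends. A minimal standalone sketch of the same topology, reusing the interface and namespace names from the trace:

    # Namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # Three veth pairs: one initiator link and two target-facing links
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring everything up, including loopback inside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side ends so 10.0.0.1 can reach both target addresses
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

The iptables rule and the three pings that follow are then only a reachability check: 10.0.0.2 and 10.0.0.3 from the host side, and 10.0.0.1 from inside the namespace.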
00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:47.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:18:47.124 00:18:47.124 --- 10.0.0.2 ping statistics --- 00:18:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.124 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:47.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:47.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:18:47.124 00:18:47.124 --- 10.0.0.3 ping statistics --- 00:18:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.124 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:47.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:47.124 00:18:47.124 --- 10.0.0.1 ping statistics --- 00:18:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.124 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.124 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=79026 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 79026 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 79026 ']' 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:47.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.125 06:05:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.125 [2024-07-11 06:05:03.040870] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:18:47.125 [2024-07-11 06:05:03.041043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.384 [2024-07-11 06:05:03.208942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.641 [2024-07-11 06:05:03.444625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.641 [2024-07-11 06:05:03.444722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.641 [2024-07-11 06:05:03.444754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.641 [2024-07-11 06:05:03.444773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.641 [2024-07-11 06:05:03.444789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
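nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers before any rpc_cmd provisioning runs. A hedged sketch of that start-and-wait step (the polling loop is an illustrative stand-in for the autotest waitforlisten helper, and the rpc.py path is assumed from the repo layout seen elsewhere in the trace):

    # Launch the target in the test namespace: shm id 0, all trace groups, 4 cores, as in the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to accept configuration
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket is live, the rpc_cmd calls that follow in the trace create the Malloc0 and Delay0 bdevs, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 and its 10.0.0.2:4420 listener, after which the initiator attaches with nvme connect.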
00:18:47.641 [2024-07-11 06:05:03.445037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.641 [2024-07-11 06:05:03.445667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.641 [2024-07-11 06:05:03.445772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.641 [2024-07-11 06:05:03.445783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.898 [2024-07-11 06:05:03.644864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:48.156 06:05:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.156 06:05:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:48.156 06:05:03 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.156 06:05:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.156 06:05:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.156 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.156 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:48.156 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.156 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.156 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 Malloc0 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 Delay0 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 [2024-07-11 06:05:04.097518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.414 [2024-07-11 06:05:04.129697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.414 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:48.415 06:05:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=79090 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:50.945 06:05:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:50.945 [global] 00:18:50.945 thread=1 00:18:50.945 invalidate=1 00:18:50.945 rw=write 00:18:50.945 time_based=1 00:18:50.945 runtime=60 00:18:50.945 ioengine=libaio 00:18:50.945 direct=1 00:18:50.945 bs=4096 00:18:50.945 iodepth=1 00:18:50.945 norandommap=0 00:18:50.945 numjobs=1 00:18:50.945 00:18:50.945 verify_dump=1 00:18:50.945 verify_backlog=512 00:18:50.945 verify_state_save=0 00:18:50.945 do_verify=1 00:18:50.945 verify=crc32c-intel 00:18:50.945 [job0] 00:18:50.945 filename=/dev/nvme0n1 00:18:50.945 Could not set queue depth (nvme0n1) 00:18:50.945 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.945 fio-3.35 00:18:50.945 Starting 1 thread 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 true 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 true 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 true 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.477 true 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.477 06:05:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.766 true 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.766 true 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.766 true 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.766 true 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:56.766 06:05:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 79090 00:19:53.016 00:19:53.016 job0: (groupid=0, jobs=1): err= 0: pid=79111: Thu Jul 11 06:06:06 2024 00:19:53.016 read: IOPS=674, BW=2697KiB/s (2761kB/s)(158MiB/60000msec) 00:19:53.016 slat (usec): min=11, max=140, avg=14.70, stdev= 4.08 00:19:53.016 clat (usec): min=194, max=825, avg=247.03, stdev=24.03 00:19:53.016 lat (usec): min=207, max=841, avg=261.73, stdev=24.71 00:19:53.016 clat percentiles (usec): 00:19:53.016 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:19:53.016 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:19:53.016 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:19:53.016 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 379], 99.95th=[ 545], 00:19:53.016 | 99.99th=[ 766] 00:19:53.016 write: IOPS=679, BW=2719KiB/s (2785kB/s)(159MiB/60000msec); 0 zone resets 00:19:53.016 slat (usec): min=12, max=18379, avg=22.97, stdev=108.31 00:19:53.016 clat (usec): min=143, max=40570k, avg=1184.92, stdev=200877.62 00:19:53.016 lat (usec): min=162, max=40570k, avg=1207.89, stdev=200877.64 00:19:53.016 clat percentiles (usec): 00:19:53.016 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:19:53.016 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:19:53.016 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 237], 00:19:53.016 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 453], 00:19:53.016 | 99.99th=[ 717] 00:19:53.016 bw ( KiB/s): min= 808, max= 9208, per=100.00%, avg=8192.00, stdev=1402.73, samples=39 00:19:53.016 iops : min= 202, max= 2302, avg=2048.00, stdev=350.68, samples=39 00:19:53.016 lat (usec) : 250=79.80%, 500=20.15%, 750=0.03%, 1000=0.01% 00:19:53.016 lat (msec) : 2=0.01%, >=2000=0.01% 00:19:53.016 cpu : usr=0.55%, sys=1.91%, ctx=81245, majf=0, minf=2 00:19:53.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.016 issued rwts: total=40448,40789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.016 00:19:53.016 Run status group 0 (all jobs): 00:19:53.016 READ: bw=2697KiB/s (2761kB/s), 2697KiB/s-2697KiB/s (2761kB/s-2761kB/s), io=158MiB (166MB), run=60000-60000msec 00:19:53.016 WRITE: bw=2719KiB/s (2785kB/s), 2719KiB/s-2719KiB/s (2785kB/s-2785kB/s), io=159MiB (167MB), run=60000-60000msec 00:19:53.016 00:19:53.016 Disk stats (read/write): 00:19:53.016 nvme0n1: ios=40594/40448, merge=0/0, ticks=10310/8125, in_queue=18435, util=99.60% 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:53.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:53.016 nvmf hotplug test: fio successful as expected 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.016 rmmod nvme_tcp 00:19:53.016 rmmod nvme_fabrics 00:19:53.016 rmmod nvme_keyring 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 79026 ']' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 79026 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 79026 ']' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 79026 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.016 
06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79026 00:19:53.016 killing process with pid 79026 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79026' 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 79026 00:19:53.016 06:06:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 79026 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:53.016 00:19:53.016 real 1m5.500s 00:19:53.016 user 3m55.671s 00:19:53.016 sys 0m20.882s 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.016 06:06:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.016 ************************************ 00:19:53.016 END TEST nvmf_initiator_timeout 00:19:53.016 ************************************ 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:53.016 06:06:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:19:53.016 06:06:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.016 06:06:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.016 06:06:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:19:53.016 06:06:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.016 06:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.016 ************************************ 00:19:53.016 START TEST nvmf_identify 00:19:53.016 ************************************ 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:53.016 * Looking 
for test storage... 00:19:53.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.016 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:53.017 Cannot find device "nvmf_tgt_br" 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.017 Cannot find device "nvmf_tgt_br2" 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:53.017 Cannot find device "nvmf_tgt_br" 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:53.017 Cannot find device "nvmf_tgt_br2" 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:53.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.017 06:06:08 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:53.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:53.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:53.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:53.017 00:19:53.017 --- 10.0.0.2 ping statistics --- 00:19:53.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.017 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:53.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:53.017 00:19:53.017 --- 10.0.0.3 ping statistics --- 00:19:53.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.017 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:53.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:53.017 00:19:53.017 --- 10.0.0.1 ping statistics --- 00:19:53.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.017 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:53.017 06:06:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79939 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79939 00:19:53.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 79939 ']' 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
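The network bring-up and target launch traced above (nvmf_veth_init followed by starting nvmf_tgt) condense to the shell sketch below. It is a reconstruction from the commands this run prints, not a separate script: the interface names, 10.0.0.x addresses, and nvmf_tgt path are the ones shown in this log, and every step must run as root.

    # Target namespace plus veth pairs (a second target interface,
    # nvmf_tgt_if2 / 10.0.0.3, is created the same way in the log above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses: initiator side on the host, target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring the links up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP port, verify reachability, then start the target in the namespace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ...then wait for /var/tmp/spdk.sock before issuing RPCs, as waitforlisten does below.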
00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.018 06:06:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:53.018 [2024-07-11 06:06:08.633194] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:53.018 [2024-07-11 06:06:08.633429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.018 [2024-07-11 06:06:08.809931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.276 [2024-07-11 06:06:09.046750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.276 [2024-07-11 06:06:09.046862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.276 [2024-07-11 06:06:09.046893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.276 [2024-07-11 06:06:09.046921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.276 [2024-07-11 06:06:09.046932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.276 [2024-07-11 06:06:09.047180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.276 [2024-07-11 06:06:09.048098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.276 [2024-07-11 06:06:09.048295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.277 [2024-07-11 06:06:09.048337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.535 [2024-07-11 06:06:09.257658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:53.794 [2024-07-11 06:06:09.616542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.794 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 Malloc0 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 [2024-07-11 06:06:09.774626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.053 [ 00:19:54.053 { 00:19:54.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:54.053 "subtype": "Discovery", 00:19:54.053 "listen_addresses": [ 00:19:54.053 { 00:19:54.053 "trtype": "TCP", 00:19:54.053 "adrfam": "IPv4", 00:19:54.053 "traddr": "10.0.0.2", 00:19:54.053 "trsvcid": "4420" 00:19:54.053 } 00:19:54.053 ], 00:19:54.053 "allow_any_host": true, 00:19:54.053 "hosts": [] 00:19:54.053 }, 00:19:54.053 { 00:19:54.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.053 "subtype": "NVMe", 00:19:54.053 "listen_addresses": [ 00:19:54.053 { 00:19:54.053 "trtype": "TCP", 00:19:54.053 "adrfam": "IPv4", 00:19:54.053 "traddr": "10.0.0.2", 00:19:54.053 "trsvcid": "4420" 00:19:54.053 } 00:19:54.053 ], 00:19:54.053 "allow_any_host": true, 00:19:54.053 "hosts": [], 00:19:54.053 "serial_number": "SPDK00000000000001", 00:19:54.053 "model_number": "SPDK bdev Controller", 00:19:54.053 "max_namespaces": 32, 00:19:54.053 "min_cntlid": 1, 00:19:54.053 "max_cntlid": 65519, 00:19:54.053 "namespaces": [ 00:19:54.053 { 00:19:54.053 "nsid": 1, 00:19:54.053 "bdev_name": "Malloc0", 00:19:54.053 "name": "Malloc0", 00:19:54.053 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:54.053 "eui64": "ABCDEF0123456789", 00:19:54.053 "uuid": "d5be916c-bb66-4711-87c3-947f9c0516f3" 00:19:54.053 } 00:19:54.053 ] 00:19:54.053 } 00:19:54.053 ] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.053 06:06:09 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:54.053 [2024-07-11 06:06:09.860015] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:54.053 [2024-07-11 06:06:09.860124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79974 ] 00:19:54.315 [2024-07-11 06:06:10.025763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:54.315 [2024-07-11 06:06:10.025916] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:54.315 [2024-07-11 06:06:10.025934] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:54.315 [2024-07-11 06:06:10.025965] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:54.315 [2024-07-11 06:06:10.025982] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:54.315 [2024-07-11 06:06:10.026160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:54.315 [2024-07-11 06:06:10.026230] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:19:54.315 [2024-07-11 06:06:10.032670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:54.315 [2024-07-11 06:06:10.032715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:54.315 [2024-07-11 06:06:10.032730] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:54.315 [2024-07-11 06:06:10.032742] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:54.315 [2024-07-11 06:06:10.032830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.032846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.032855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.315 [2024-07-11 06:06:10.032879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:54.315 [2024-07-11 06:06:10.032922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.315 [2024-07-11 06:06:10.038834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.315 [2024-07-11 06:06:10.038867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.315 [2024-07-11 06:06:10.038876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.038887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.315 [2024-07-11 06:06:10.038915] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:54.315 [2024-07-11 06:06:10.038937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:54.315 [2024-07-11 06:06:10.038948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:54.315 [2024-07-11 06:06:10.038967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.038977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.038985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.315 [2024-07-11 06:06:10.039002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.315 [2024-07-11 06:06:10.039040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.315 [2024-07-11 06:06:10.039137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.315 [2024-07-11 06:06:10.039153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.315 [2024-07-11 06:06:10.039164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.315 [2024-07-11 06:06:10.039187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:54.315 [2024-07-11 06:06:10.039204] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:54.315 [2024-07-11 06:06:10.039219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.315 [2024-07-11 06:06:10.039253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.315 [2024-07-11 06:06:10.039286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.315 [2024-07-11 06:06:10.039350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.315 [2024-07-11 06:06:10.039365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.315 [2024-07-11 06:06:10.039372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.315 [2024-07-11 06:06:10.039390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:54.315 [2024-07-11 06:06:10.039409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:54.315 [2024-07-11 06:06:10.039423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.315 [2024-07-11 06:06:10.039453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.315 [2024-07-11 06:06:10.039481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.315 [2024-07-11 06:06:10.039553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.315 [2024-07-11 06:06:10.039565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.315 [2024-07-11 06:06:10.039572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.315 [2024-07-11 06:06:10.039589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:54.315 [2024-07-11 06:06:10.039607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.315 [2024-07-11 06:06:10.039628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.315 [2024-07-11 06:06:10.039659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.315 [2024-07-11 06:06:10.039693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.315 [2024-07-11 06:06:10.039766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.039778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.039785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.039792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.039805] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:54.316 [2024-07-11 06:06:10.039816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:54.316 [2024-07-11 06:06:10.039830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:54.316 [2024-07-11 06:06:10.039940] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:54.316 [2024-07-11 06:06:10.039950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:54.316 [2024-07-11 06:06:10.039966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.039975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.039983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.040003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.316 [2024-07-11 06:06:10.040032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b100, cid 0, qid 0 00:19:54.316 [2024-07-11 06:06:10.040099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.040111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.040117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.040135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:54.316 [2024-07-11 06:06:10.040153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.040186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.316 [2024-07-11 06:06:10.040219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.316 [2024-07-11 06:06:10.040286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.040299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.040305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.040322] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:54.316 [2024-07-11 06:06:10.040332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.040351] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:54.316 [2024-07-11 06:06:10.040368] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.040392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.040417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.316 [2024-07-11 06:06:10.040460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.316 [2024-07-11 06:06:10.040606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.316 [2024-07-11 06:06:10.040619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.316 [2024-07-11 06:06:10.040625] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040633] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:19:54.316 [2024-07-11 06:06:10.040658] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.316 [2024-07-11 06:06:10.040668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040687] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040701] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.040734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.040740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.040767] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:54.316 [2024-07-11 06:06:10.040777] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:54.316 [2024-07-11 06:06:10.040796] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:54.316 [2024-07-11 06:06:10.040806] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:54.316 [2024-07-11 06:06:10.040815] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:54.316 [2024-07-11 06:06:10.040824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.040844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.040865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.040884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.040899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.316 [2024-07-11 06:06:10.040932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.316 [2024-07-11 06:06:10.041008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.041023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.041030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.041051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 
06:06:10.041060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.316 [2024-07-11 06:06:10.041097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.316 [2024-07-11 06:06:10.041135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.316 [2024-07-11 06:06:10.041170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.316 [2024-07-11 06:06:10.041203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.041221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:54.316 [2024-07-11 06:06:10.041237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.316 [2024-07-11 06:06:10.041292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.316 [2024-07-11 06:06:10.041304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:19:54.316 [2024-07-11 06:06:10.041312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:19:54.316 [2024-07-11 06:06:10.041320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.316 [2024-07-11 06:06:10.041328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.316 [2024-07-11 06:06:10.041448] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.316 [2024-07-11 06:06:10.041473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.316 [2024-07-11 06:06:10.041482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.316 [2024-07-11 06:06:10.041500] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:54.316 [2024-07-11 06:06:10.041510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:54.316 [2024-07-11 06:06:10.041534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.316 [2024-07-11 06:06:10.041544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.316 [2024-07-11 06:06:10.041559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.316 [2024-07-11 06:06:10.041592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.316 [2024-07-11 06:06:10.041700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.316 [2024-07-11 06:06:10.041715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.316 [2024-07-11 06:06:10.041722] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041736] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:19:54.317 [2024-07-11 06:06:10.041746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.317 [2024-07-11 06:06:10.041754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041768] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041776] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.317 [2024-07-11 06:06:10.041800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.317 [2024-07-11 06:06:10.041807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.317 [2024-07-11 06:06:10.041846] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:54.317 [2024-07-11 06:06:10.041908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.317 [2024-07-11 06:06:10.041937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.317 [2024-07-11 06:06:10.041951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
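At this point the trace shows the discovery controller fully initialized and the discovery log page (GET LOG PAGE, log identifier 0x70 in cdw10) being fetched. For a quick manual cross-check of the same listener without the -L all debug flood, the equivalent information can be read with either of the commands below (optional manual steps, not part of identify.sh; the second assumes nvme-cli is installed on the initiator side):

    # SPDK's identify tool against the discovery subsystem, without debug logging
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

    # Equivalent discovery log via stock nvme-cli
    nvme discover -t tcp -a 10.0.0.2 -s 4420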
00:19:54.317 [2024-07-11 06:06:10.041969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.041976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.317 [2024-07-11 06:06:10.041992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.317 [2024-07-11 06:06:10.042031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.317 [2024-07-11 06:06:10.042045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.317 [2024-07-11 06:06:10.042352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.317 [2024-07-11 06:06:10.042382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.317 [2024-07-11 06:06:10.042391] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:19:54.317 [2024-07-11 06:06:10.042408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:19:54.317 [2024-07-11 06:06:10.042416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042432] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042441] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.317 [2024-07-11 06:06:10.042471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.317 [2024-07-11 06:06:10.042478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.317 [2024-07-11 06:06:10.042514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.317 [2024-07-11 06:06:10.042526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.317 [2024-07-11 06:06:10.042532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.317 [2024-07-11 06:06:10.042566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.042575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.317 [2024-07-11 06:06:10.042596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.317 [2024-07-11 06:06:10.042633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.317 [2024-07-11 06:06:10.046784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.317 [2024-07-11 06:06:10.046803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.317 [2024-07-11 06:06:10.046810] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 
06:06:10.046817] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:19:54.317 [2024-07-11 06:06:10.046826] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:19:54.317 [2024-07-11 06:06:10.046833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.046846] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.046853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.046863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.317 [2024-07-11 06:06:10.046876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.317 [2024-07-11 06:06:10.046882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.046890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.317 [2024-07-11 06:06:10.046915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.046925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.317 [2024-07-11 06:06:10.046941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.317 [2024-07-11 06:06:10.046985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.317 [2024-07-11 06:06:10.047100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.317 [2024-07-11 06:06:10.047115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.317 [2024-07-11 06:06:10.047122] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.047132] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:19:54.317 [2024-07-11 06:06:10.047140] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:19:54.317 [2024-07-11 06:06:10.047158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.047174] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.047181] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.047207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.317 [2024-07-11 06:06:10.047219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.317 [2024-07-11 06:06:10.047226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.317 [2024-07-11 06:06:10.047233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.317 ===================================================== 00:19:54.317 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:54.317 ===================================================== 00:19:54.317 Controller Capabilities/Features 00:19:54.317 ================================ 00:19:54.317 Vendor ID: 0000 00:19:54.317 Subsystem Vendor ID: 0000 00:19:54.317 
Serial Number: .................... 00:19:54.317 Model Number: ........................................ 00:19:54.317 Firmware Version: 24.09 00:19:54.317 Recommended Arb Burst: 0 00:19:54.317 IEEE OUI Identifier: 00 00 00 00:19:54.317 Multi-path I/O 00:19:54.317 May have multiple subsystem ports: No 00:19:54.317 May have multiple controllers: No 00:19:54.317 Associated with SR-IOV VF: No 00:19:54.317 Max Data Transfer Size: 131072 00:19:54.317 Max Number of Namespaces: 0 00:19:54.317 Max Number of I/O Queues: 1024 00:19:54.317 NVMe Specification Version (VS): 1.3 00:19:54.317 NVMe Specification Version (Identify): 1.3 00:19:54.317 Maximum Queue Entries: 128 00:19:54.317 Contiguous Queues Required: Yes 00:19:54.317 Arbitration Mechanisms Supported 00:19:54.317 Weighted Round Robin: Not Supported 00:19:54.317 Vendor Specific: Not Supported 00:19:54.317 Reset Timeout: 15000 ms 00:19:54.317 Doorbell Stride: 4 bytes 00:19:54.317 NVM Subsystem Reset: Not Supported 00:19:54.317 Command Sets Supported 00:19:54.317 NVM Command Set: Supported 00:19:54.317 Boot Partition: Not Supported 00:19:54.317 Memory Page Size Minimum: 4096 bytes 00:19:54.317 Memory Page Size Maximum: 4096 bytes 00:19:54.317 Persistent Memory Region: Not Supported 00:19:54.317 Optional Asynchronous Events Supported 00:19:54.317 Namespace Attribute Notices: Not Supported 00:19:54.317 Firmware Activation Notices: Not Supported 00:19:54.317 ANA Change Notices: Not Supported 00:19:54.317 PLE Aggregate Log Change Notices: Not Supported 00:19:54.317 LBA Status Info Alert Notices: Not Supported 00:19:54.317 EGE Aggregate Log Change Notices: Not Supported 00:19:54.317 Normal NVM Subsystem Shutdown event: Not Supported 00:19:54.317 Zone Descriptor Change Notices: Not Supported 00:19:54.317 Discovery Log Change Notices: Supported 00:19:54.317 Controller Attributes 00:19:54.317 128-bit Host Identifier: Not Supported 00:19:54.317 Non-Operational Permissive Mode: Not Supported 00:19:54.317 NVM Sets: Not Supported 00:19:54.317 Read Recovery Levels: Not Supported 00:19:54.317 Endurance Groups: Not Supported 00:19:54.317 Predictable Latency Mode: Not Supported 00:19:54.317 Traffic Based Keep ALive: Not Supported 00:19:54.317 Namespace Granularity: Not Supported 00:19:54.317 SQ Associations: Not Supported 00:19:54.317 UUID List: Not Supported 00:19:54.317 Multi-Domain Subsystem: Not Supported 00:19:54.317 Fixed Capacity Management: Not Supported 00:19:54.317 Variable Capacity Management: Not Supported 00:19:54.317 Delete Endurance Group: Not Supported 00:19:54.317 Delete NVM Set: Not Supported 00:19:54.317 Extended LBA Formats Supported: Not Supported 00:19:54.317 Flexible Data Placement Supported: Not Supported 00:19:54.317 00:19:54.317 Controller Memory Buffer Support 00:19:54.317 ================================ 00:19:54.317 Supported: No 00:19:54.317 00:19:54.317 Persistent Memory Region Support 00:19:54.317 ================================ 00:19:54.317 Supported: No 00:19:54.317 00:19:54.317 Admin Command Set Attributes 00:19:54.317 ============================ 00:19:54.317 Security Send/Receive: Not Supported 00:19:54.317 Format NVM: Not Supported 00:19:54.317 Firmware Activate/Download: Not Supported 00:19:54.317 Namespace Management: Not Supported 00:19:54.318 Device Self-Test: Not Supported 00:19:54.318 Directives: Not Supported 00:19:54.318 NVMe-MI: Not Supported 00:19:54.318 Virtualization Management: Not Supported 00:19:54.318 Doorbell Buffer Config: Not Supported 00:19:54.318 Get LBA Status Capability: Not Supported 00:19:54.318 
Command & Feature Lockdown Capability: Not Supported 00:19:54.318 Abort Command Limit: 1 00:19:54.318 Async Event Request Limit: 4 00:19:54.318 Number of Firmware Slots: N/A 00:19:54.318 Firmware Slot 1 Read-Only: N/A 00:19:54.318 Firmware Activation Without Reset: N/A 00:19:54.318 Multiple Update Detection Support: N/A 00:19:54.318 Firmware Update Granularity: No Information Provided 00:19:54.318 Per-Namespace SMART Log: No 00:19:54.318 Asymmetric Namespace Access Log Page: Not Supported 00:19:54.318 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:54.318 Command Effects Log Page: Not Supported 00:19:54.318 Get Log Page Extended Data: Supported 00:19:54.318 Telemetry Log Pages: Not Supported 00:19:54.318 Persistent Event Log Pages: Not Supported 00:19:54.318 Supported Log Pages Log Page: May Support 00:19:54.318 Commands Supported & Effects Log Page: Not Supported 00:19:54.318 Feature Identifiers & Effects Log Page:May Support 00:19:54.318 NVMe-MI Commands & Effects Log Page: May Support 00:19:54.318 Data Area 4 for Telemetry Log: Not Supported 00:19:54.318 Error Log Page Entries Supported: 128 00:19:54.318 Keep Alive: Not Supported 00:19:54.318 00:19:54.318 NVM Command Set Attributes 00:19:54.318 ========================== 00:19:54.318 Submission Queue Entry Size 00:19:54.318 Max: 1 00:19:54.318 Min: 1 00:19:54.318 Completion Queue Entry Size 00:19:54.318 Max: 1 00:19:54.318 Min: 1 00:19:54.318 Number of Namespaces: 0 00:19:54.318 Compare Command: Not Supported 00:19:54.318 Write Uncorrectable Command: Not Supported 00:19:54.318 Dataset Management Command: Not Supported 00:19:54.318 Write Zeroes Command: Not Supported 00:19:54.318 Set Features Save Field: Not Supported 00:19:54.318 Reservations: Not Supported 00:19:54.318 Timestamp: Not Supported 00:19:54.318 Copy: Not Supported 00:19:54.318 Volatile Write Cache: Not Present 00:19:54.318 Atomic Write Unit (Normal): 1 00:19:54.318 Atomic Write Unit (PFail): 1 00:19:54.318 Atomic Compare & Write Unit: 1 00:19:54.318 Fused Compare & Write: Supported 00:19:54.318 Scatter-Gather List 00:19:54.318 SGL Command Set: Supported 00:19:54.318 SGL Keyed: Supported 00:19:54.318 SGL Bit Bucket Descriptor: Not Supported 00:19:54.318 SGL Metadata Pointer: Not Supported 00:19:54.318 Oversized SGL: Not Supported 00:19:54.318 SGL Metadata Address: Not Supported 00:19:54.318 SGL Offset: Supported 00:19:54.318 Transport SGL Data Block: Not Supported 00:19:54.318 Replay Protected Memory Block: Not Supported 00:19:54.318 00:19:54.318 Firmware Slot Information 00:19:54.318 ========================= 00:19:54.318 Active slot: 0 00:19:54.318 00:19:54.318 00:19:54.318 Error Log 00:19:54.318 ========= 00:19:54.318 00:19:54.318 Active Namespaces 00:19:54.318 ================= 00:19:54.318 Discovery Log Page 00:19:54.318 ================== 00:19:54.318 Generation Counter: 2 00:19:54.318 Number of Records: 2 00:19:54.318 Record Format: 0 00:19:54.318 00:19:54.318 Discovery Log Entry 0 00:19:54.318 ---------------------- 00:19:54.318 Transport Type: 3 (TCP) 00:19:54.318 Address Family: 1 (IPv4) 00:19:54.318 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:54.318 Entry Flags: 00:19:54.318 Duplicate Returned Information: 1 00:19:54.318 Explicit Persistent Connection Support for Discovery: 1 00:19:54.318 Transport Requirements: 00:19:54.318 Secure Channel: Not Required 00:19:54.318 Port ID: 0 (0x0000) 00:19:54.318 Controller ID: 65535 (0xffff) 00:19:54.318 Admin Max SQ Size: 128 00:19:54.318 Transport Service Identifier: 4420 00:19:54.318 NVM Subsystem 
Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:54.318 Transport Address: 10.0.0.2 00:19:54.318 Discovery Log Entry 1 00:19:54.318 ---------------------- 00:19:54.318 Transport Type: 3 (TCP) 00:19:54.318 Address Family: 1 (IPv4) 00:19:54.318 Subsystem Type: 2 (NVM Subsystem) 00:19:54.318 Entry Flags: 00:19:54.318 Duplicate Returned Information: 0 00:19:54.318 Explicit Persistent Connection Support for Discovery: 0 00:19:54.318 Transport Requirements: 00:19:54.318 Secure Channel: Not Required 00:19:54.318 Port ID: 0 (0x0000) 00:19:54.318 Controller ID: 65535 (0xffff) 00:19:54.318 Admin Max SQ Size: 128 00:19:54.318 Transport Service Identifier: 4420 00:19:54.318 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:54.318 Transport Address: 10.0.0.2 [2024-07-11 06:06:10.047385] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:54.318 [2024-07-11 06:06:10.047410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.318 [2024-07-11 06:06:10.047439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.318 [2024-07-11 06:06:10.047458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.318 [2024-07-11 06:06:10.047475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.318 [2024-07-11 06:06:10.047500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.318 [2024-07-11 06:06:10.047535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.318 [2024-07-11 06:06:10.047568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.318 [2024-07-11 06:06:10.047636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.318 [2024-07-11 06:06:10.047669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.318 [2024-07-11 06:06:10.047678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
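
The controller listing and the two discovery log entries above were produced by SPDK's spdk_nvme_identify example pointed at the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) on 10.0.0.2:4420. The exact command is not echoed at this point in the log, so the sketch below is an assumed invocation, modelled on the subsystem identify run later in this log; leaving "subnqn:" out of the -r transport ID string should make the tool target the default discovery NQN and print the Discovery Log Page shown above.

# Assumed invocation (sketch, not copied from the test script):
# querying the discovery controller with all SPDK debug log flags enabled,
# which presumably produces the nvme_tcp.c / nvme_ctrlr.c traces interleaved above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -L all
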
00:19:54.318 [2024-07-11 06:06:10.047723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.318 [2024-07-11 06:06:10.047738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.318 [2024-07-11 06:06:10.047777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.318 [2024-07-11 06:06:10.047897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.318 [2024-07-11 06:06:10.047915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.318 [2024-07-11 06:06:10.047922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.047939] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:54.318 [2024-07-11 06:06:10.047953] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:54.318 [2024-07-11 06:06:10.047973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.047990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.318 [2024-07-11 06:06:10.048010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.318 [2024-07-11 06:06:10.048039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.318 [2024-07-11 06:06:10.048110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.318 [2024-07-11 06:06:10.048122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.318 [2024-07-11 06:06:10.048128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 06:06:10.048154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.318 [2024-07-11 06:06:10.048186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.318 [2024-07-11 06:06:10.048213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.318 [2024-07-11 06:06:10.048292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.318 [2024-07-11 06:06:10.048305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.318 [2024-07-11 06:06:10.048311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.318 [2024-07-11 
06:06:10.048341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.318 [2024-07-11 06:06:10.048357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.048370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.048397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.048467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.048482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.048488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.048513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.048541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.048567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.048662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.048680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.048687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.048712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.048741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.048776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.048842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.048854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.048860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.048884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048893] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.048899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.048916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.048942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.049867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.049879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.049885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.049909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.049928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.319 [2024-07-11 06:06:10.049941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.319 [2024-07-11 06:06:10.049967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.319 [2024-07-11 06:06:10.050040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.319 [2024-07-11 06:06:10.050054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.319 [2024-07-11 06:06:10.050063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.050070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.319 [2024-07-11 06:06:10.050088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.319 [2024-07-11 06:06:10.050096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.320 [2024-07-11 06:06:10.050116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.320 [2024-07-11 06:06:10.050141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.320 [2024-07-11 06:06:10.050223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.320 [2024-07-11 06:06:10.050241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.320 [2024-07-11 06:06:10.050248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.320 [2024-07-11 06:06:10.050273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.320 [2024-07-11 06:06:10.050301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.320 [2024-07-11 06:06:10.050327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.320 [2024-07-11 06:06:10.050386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.320 [2024-07-11 06:06:10.050397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.320 [2024-07-11 06:06:10.050403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.320 [2024-07-11 06:06:10.050427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.320 [2024-07-11 06:06:10.050455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.320 [2024-07-11 06:06:10.050486] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.320 [2024-07-11 06:06:10.050550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.320 [2024-07-11 06:06:10.050561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.320 [2024-07-11 06:06:10.050567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.320 [2024-07-11 06:06:10.050591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.050610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.320 [2024-07-11 06:06:10.050623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.320 [2024-07-11 06:06:10.053732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.320 [2024-07-11 06:06:10.053786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.320 [2024-07-11 06:06:10.053800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.320 [2024-07-11 06:06:10.053806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.053814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.320 [2024-07-11 06:06:10.053837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.053846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.053853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.320 [2024-07-11 06:06:10.053876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.320 [2024-07-11 06:06:10.053910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.320 [2024-07-11 06:06:10.053985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.320 [2024-07-11 06:06:10.053997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.320 [2024-07-11 06:06:10.054004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.320 [2024-07-11 06:06:10.054011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.320 [2024-07-11 06:06:10.054032] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:19:54.320 00:19:54.320 06:06:10 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:54.320 [2024-07-11 06:06:10.167063] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
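
The spdk_nvme_identify invocation above (host/identify.sh line 45) targets the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery NQN, and it drives the trace that follows: the host connects the admin queue (icreq/icresp, FABRIC CONNECT), reads VS and CAP, toggles CC.EN and waits for CSTS.RDY, then walks through identify controller, configure AER, keep-alive timeout, number of queues, and the per-namespace identify states. An annotated restatement of that command is sketched below; the command line itself is taken verbatim from the log, while the per-key comments are explanatory notes on the standard transport ID fields, not test-script output.

# trtype:tcp                          - NVMe-oF transport type
# adrfam:IPv4                         - address family of traddr
# traddr:10.0.0.2                     - target listen address (matches the discovery entries above)
# trsvcid:4420                        - transport service ID, i.e. the TCP port
# subnqn:nqn.2016-06.io.spdk:cnode1   - NVM subsystem NQN to connect to
# -L all                              - enable every SPDK debug log flag, which is what
#                                       produces the verbose state-machine trace below
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all
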
00:19:54.320 [2024-07-11 06:06:10.167182] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79984 ] 00:19:54.582 [2024-07-11 06:06:10.339350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:54.582 [2024-07-11 06:06:10.339485] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:54.582 [2024-07-11 06:06:10.339502] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:54.582 [2024-07-11 06:06:10.339531] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:54.582 [2024-07-11 06:06:10.339548] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:54.582 [2024-07-11 06:06:10.339725] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:54.582 [2024-07-11 06:06:10.339794] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:19:54.582 [2024-07-11 06:06:10.345817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:54.582 [2024-07-11 06:06:10.345856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:54.582 [2024-07-11 06:06:10.345874] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:54.582 [2024-07-11 06:06:10.345882] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:54.582 [2024-07-11 06:06:10.345970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.582 [2024-07-11 06:06:10.345985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.345996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.346021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:54.583 [2024-07-11 06:06:10.346070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.353673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.353702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.353712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.353729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.353748] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:54.583 [2024-07-11 06:06:10.353764] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:54.583 [2024-07-11 06:06:10.353776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:54.583 [2024-07-11 06:06:10.353799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.353809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 
06:06:10.353817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.353835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.353873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.354327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.354352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.354365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.354390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:54.583 [2024-07-11 06:06:10.354406] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:54.583 [2024-07-11 06:06:10.354419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.354458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.354490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.354570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.354582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.354588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.354606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:54.583 [2024-07-11 06:06:10.354620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.354633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.354666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.354685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.354717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.355107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.355130] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.355138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.355157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.355176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.355212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.355242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.355316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.355327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.355334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.355354] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:54.583 [2024-07-11 06:06:10.355363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.355377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.355494] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:54.583 [2024-07-11 06:06:10.355503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.355518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.355539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.355553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.355582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.356050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.356076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.356085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 
06:06:10.356092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.356102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:54.583 [2024-07-11 06:06:10.356122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.356132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.356139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.356154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.356198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.356266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.356289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.356296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.356303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.356313] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:54.583 [2024-07-11 06:06:10.356327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:54.583 [2024-07-11 06:06:10.356341] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:54.583 [2024-07-11 06:06:10.356359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:54.583 [2024-07-11 06:06:10.356382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.356393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.583 [2024-07-11 06:06:10.356409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.583 [2024-07-11 06:06:10.356453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.583 [2024-07-11 06:06:10.357015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.583 [2024-07-11 06:06:10.357040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.583 [2024-07-11 06:06:10.357049] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357057] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:19:54.583 [2024-07-11 06:06:10.357067] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.583 [2024-07-11 06:06:10.357076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 
06:06:10.357095] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357106] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.583 [2024-07-11 06:06:10.357137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.583 [2024-07-11 06:06:10.357143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.583 [2024-07-11 06:06:10.357170] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:54.583 [2024-07-11 06:06:10.357180] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:54.583 [2024-07-11 06:06:10.357188] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:54.583 [2024-07-11 06:06:10.357197] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:54.583 [2024-07-11 06:06:10.357205] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:54.583 [2024-07-11 06:06:10.357214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:54.583 [2024-07-11 06:06:10.357233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:54.583 [2024-07-11 06:06:10.357251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.583 [2024-07-11 06:06:10.357273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.357289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.584 [2024-07-11 06:06:10.357322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.584 [2024-07-11 06:06:10.361665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.361696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.361710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.361736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.361771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.584 [2024-07-11 06:06:10.361784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 
[2024-07-11 06:06:10.361791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.361809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.584 [2024-07-11 06:06:10.361819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.361850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.584 [2024-07-11 06:06:10.361860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.361884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.584 [2024-07-11 06:06:10.361893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.361914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.361929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.361937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.361951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.584 [2024-07-11 06:06:10.361988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:19:54.584 [2024-07-11 06:06:10.362000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:19:54.584 [2024-07-11 06:06:10.362012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:19:54.584 [2024-07-11 06:06:10.362021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.584 [2024-07-11 06:06:10.362028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.584 [2024-07-11 06:06:10.362671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.362695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.362704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.362711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.362723] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:54.584 [2024-07-11 06:06:10.362734] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.362748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.362759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.362771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.362781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.362789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.362809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.584 [2024-07-11 06:06:10.362847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.584 [2024-07-11 06:06:10.362919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.362931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.362937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.362944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.363049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.363080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.363099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.363123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.584 [2024-07-11 06:06:10.363155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.584 [2024-07-11 06:06:10.363600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.584 [2024-07-11 06:06:10.363625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.584 [2024-07-11 06:06:10.363634] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363654] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:19:54.584 [2024-07-11 06:06:10.363664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.584 [2024-07-11 06:06:10.363678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363692] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363700] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.363724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.363730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.363779] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:54.584 [2024-07-11 06:06:10.363800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.363824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.363847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.363857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.363875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.584 [2024-07-11 06:06:10.363910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.584 [2024-07-11 06:06:10.364371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.584 [2024-07-11 06:06:10.364396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.584 [2024-07-11 06:06:10.364404] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364415] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:19:54.584 [2024-07-11 06:06:10.364424] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.584 [2024-07-11 06:06:10.364431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364443] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364451] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.364474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.364480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.364532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:54.584 [2024-07-11 06:06:10.364555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:19:54.584 [2024-07-11 06:06:10.364577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.364587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.584 [2024-07-11 06:06:10.364602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.584 [2024-07-11 06:06:10.364634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.584 [2024-07-11 06:06:10.365182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.584 [2024-07-11 06:06:10.365204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.584 [2024-07-11 06:06:10.365213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.365220] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:19:54.584 [2024-07-11 06:06:10.365228] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.584 [2024-07-11 06:06:10.365235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.365251] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.365259] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.365272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.584 [2024-07-11 06:06:10.365282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.584 [2024-07-11 06:06:10.365289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.584 [2024-07-11 06:06:10.365296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.584 [2024-07-11 06:06:10.365328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365413] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:54.585 [2024-07-11 06:06:10.365424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:54.585 [2024-07-11 06:06:10.365434] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:54.585 [2024-07-11 06:06:10.365472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.365483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.365498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.365511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.365519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.365530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.365543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.585 [2024-07-11 06:06:10.365581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.585 [2024-07-11 06:06:10.365594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.585 [2024-07-11 06:06:10.369668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.369699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.369708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.369717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.369731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.369741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.369747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.369754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.369779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.369790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.369805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.369840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.585 [2024-07-11 06:06:10.369939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.369951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.369957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.369964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.369982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.369990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
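At this point the controller has reached the ready state and the trace shows the host issuing optional Get Features reads over the admin queue (Keep Alive Timer, Arbitration, Power Management so far; more feature and Get Log Page reads follow below). Purely as an illustration, and not something this test run executes (the test uses SPDK's userspace initiator, not the kernel one), the same feature IDs could be read back with nvme-cli after connecting to the subsystem shown in the log; /dev/nvme0 is a placeholder device name.

```bash
# Illustration only: mirrors the Get Features reads visible in the surrounding trace.
# Assumes nvme-cli is installed; /dev/nvme0 is a placeholder for the attached controller.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme get-feature /dev/nvme0 --feature-id=0x01   # Arbitration       (cdw10:00000001 above)
nvme get-feature /dev/nvme0 --feature-id=0x02   # Power Management  (cdw10:00000002 above)
nvme get-feature /dev/nvme0 --feature-id=0x0f   # Keep Alive Timer  (cdw10:0000000f above)
```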
capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.370004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.585 [2024-07-11 06:06:10.370444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.370497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.370506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.370535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.370560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.585 [2024-07-11 06:06:10.370681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.370695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.370701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.370743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.370769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.370809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:19:54.585 [2024-07-11 06:06:10.370845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.370873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 
00:19:54.585 [2024-07-11 06:06:10.370885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.585 [2024-07-11 06:06:10.370917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:19:54.585 [2024-07-11 06:06:10.370929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:19:54.585 [2024-07-11 06:06:10.370937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:19:54.585 [2024-07-11 06:06:10.370944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:19:54.585 [2024-07-11 06:06:10.371490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.585 [2024-07-11 06:06:10.371530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.585 [2024-07-11 06:06:10.371546] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371554] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:19:54.585 [2024-07-11 06:06:10.371563] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:19:54.585 [2024-07-11 06:06:10.371571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371602] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.585 [2024-07-11 06:06:10.371637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.585 [2024-07-11 06:06:10.371665] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371675] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:19:54.585 [2024-07-11 06:06:10.371684] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:19:54.585 [2024-07-11 06:06:10.371691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371703] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371709] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.585 [2024-07-11 06:06:10.371728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.585 [2024-07-11 06:06:10.371734] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371740] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:19:54.585 [2024-07-11 06:06:10.371748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:19:54.585 [2024-07-11 06:06:10.371754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371772] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371779] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:54.585 [2024-07-11 06:06:10.371797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:54.585 [2024-07-11 06:06:10.371803] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371809] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:19:54.585 [2024-07-11 06:06:10.371816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:19:54.585 [2024-07-11 06:06:10.371823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371834] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371840] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.371864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.371870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.371906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.371917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.585 [2024-07-11 06:06:10.371923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.585 [2024-07-11 06:06:10.371930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:19:54.585 [2024-07-11 06:06:10.371949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.585 [2024-07-11 06:06:10.371959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.586 [2024-07-11 06:06:10.371965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.586 [2024-07-11 06:06:10.371974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:19:54.586 [2024-07-11 06:06:10.371988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.586 [2024-07-11 06:06:10.371998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.586 [2024-07-11 06:06:10.372004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.586 [2024-07-11 06:06:10.372010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:19:54.586 ===================================================== 00:19:54.586 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:54.586 ===================================================== 00:19:54.586 Controller Capabilities/Features 00:19:54.586 ================================ 00:19:54.586 Vendor ID: 8086 00:19:54.586 Subsystem Vendor ID: 8086 00:19:54.586 Serial Number: SPDK00000000000001 00:19:54.586 Model Number: SPDK bdev Controller 00:19:54.586 Firmware Version: 24.09 00:19:54.586 
Recommended Arb Burst: 6 00:19:54.586 IEEE OUI Identifier: e4 d2 5c 00:19:54.586 Multi-path I/O 00:19:54.586 May have multiple subsystem ports: Yes 00:19:54.586 May have multiple controllers: Yes 00:19:54.586 Associated with SR-IOV VF: No 00:19:54.586 Max Data Transfer Size: 131072 00:19:54.586 Max Number of Namespaces: 32 00:19:54.586 Max Number of I/O Queues: 127 00:19:54.586 NVMe Specification Version (VS): 1.3 00:19:54.586 NVMe Specification Version (Identify): 1.3 00:19:54.586 Maximum Queue Entries: 128 00:19:54.586 Contiguous Queues Required: Yes 00:19:54.586 Arbitration Mechanisms Supported 00:19:54.586 Weighted Round Robin: Not Supported 00:19:54.586 Vendor Specific: Not Supported 00:19:54.586 Reset Timeout: 15000 ms 00:19:54.586 Doorbell Stride: 4 bytes 00:19:54.586 NVM Subsystem Reset: Not Supported 00:19:54.586 Command Sets Supported 00:19:54.586 NVM Command Set: Supported 00:19:54.586 Boot Partition: Not Supported 00:19:54.586 Memory Page Size Minimum: 4096 bytes 00:19:54.586 Memory Page Size Maximum: 4096 bytes 00:19:54.586 Persistent Memory Region: Not Supported 00:19:54.586 Optional Asynchronous Events Supported 00:19:54.586 Namespace Attribute Notices: Supported 00:19:54.586 Firmware Activation Notices: Not Supported 00:19:54.586 ANA Change Notices: Not Supported 00:19:54.586 PLE Aggregate Log Change Notices: Not Supported 00:19:54.586 LBA Status Info Alert Notices: Not Supported 00:19:54.586 EGE Aggregate Log Change Notices: Not Supported 00:19:54.586 Normal NVM Subsystem Shutdown event: Not Supported 00:19:54.586 Zone Descriptor Change Notices: Not Supported 00:19:54.586 Discovery Log Change Notices: Not Supported 00:19:54.586 Controller Attributes 00:19:54.586 128-bit Host Identifier: Supported 00:19:54.586 Non-Operational Permissive Mode: Not Supported 00:19:54.586 NVM Sets: Not Supported 00:19:54.586 Read Recovery Levels: Not Supported 00:19:54.586 Endurance Groups: Not Supported 00:19:54.586 Predictable Latency Mode: Not Supported 00:19:54.586 Traffic Based Keep ALive: Not Supported 00:19:54.586 Namespace Granularity: Not Supported 00:19:54.586 SQ Associations: Not Supported 00:19:54.586 UUID List: Not Supported 00:19:54.586 Multi-Domain Subsystem: Not Supported 00:19:54.586 Fixed Capacity Management: Not Supported 00:19:54.586 Variable Capacity Management: Not Supported 00:19:54.586 Delete Endurance Group: Not Supported 00:19:54.586 Delete NVM Set: Not Supported 00:19:54.586 Extended LBA Formats Supported: Not Supported 00:19:54.586 Flexible Data Placement Supported: Not Supported 00:19:54.586 00:19:54.586 Controller Memory Buffer Support 00:19:54.586 ================================ 00:19:54.586 Supported: No 00:19:54.586 00:19:54.586 Persistent Memory Region Support 00:19:54.586 ================================ 00:19:54.586 Supported: No 00:19:54.586 00:19:54.586 Admin Command Set Attributes 00:19:54.586 ============================ 00:19:54.586 Security Send/Receive: Not Supported 00:19:54.586 Format NVM: Not Supported 00:19:54.586 Firmware Activate/Download: Not Supported 00:19:54.586 Namespace Management: Not Supported 00:19:54.586 Device Self-Test: Not Supported 00:19:54.586 Directives: Not Supported 00:19:54.586 NVMe-MI: Not Supported 00:19:54.586 Virtualization Management: Not Supported 00:19:54.586 Doorbell Buffer Config: Not Supported 00:19:54.586 Get LBA Status Capability: Not Supported 00:19:54.586 Command & Feature Lockdown Capability: Not Supported 00:19:54.586 Abort Command Limit: 4 00:19:54.586 Async Event Request Limit: 4 00:19:54.586 Number of 
Firmware Slots: N/A 00:19:54.586 Firmware Slot 1 Read-Only: N/A 00:19:54.586 Firmware Activation Without Reset: N/A 00:19:54.586 Multiple Update Detection Support: N/A 00:19:54.586 Firmware Update Granularity: No Information Provided 00:19:54.586 Per-Namespace SMART Log: No 00:19:54.586 Asymmetric Namespace Access Log Page: Not Supported 00:19:54.586 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:54.586 Command Effects Log Page: Supported 00:19:54.586 Get Log Page Extended Data: Supported 00:19:54.586 Telemetry Log Pages: Not Supported 00:19:54.586 Persistent Event Log Pages: Not Supported 00:19:54.586 Supported Log Pages Log Page: May Support 00:19:54.586 Commands Supported & Effects Log Page: Not Supported 00:19:54.586 Feature Identifiers & Effects Log Page:May Support 00:19:54.586 NVMe-MI Commands & Effects Log Page: May Support 00:19:54.586 Data Area 4 for Telemetry Log: Not Supported 00:19:54.586 Error Log Page Entries Supported: 128 00:19:54.586 Keep Alive: Supported 00:19:54.586 Keep Alive Granularity: 10000 ms 00:19:54.586 00:19:54.586 NVM Command Set Attributes 00:19:54.586 ========================== 00:19:54.586 Submission Queue Entry Size 00:19:54.586 Max: 64 00:19:54.586 Min: 64 00:19:54.586 Completion Queue Entry Size 00:19:54.586 Max: 16 00:19:54.586 Min: 16 00:19:54.586 Number of Namespaces: 32 00:19:54.586 Compare Command: Supported 00:19:54.586 Write Uncorrectable Command: Not Supported 00:19:54.586 Dataset Management Command: Supported 00:19:54.586 Write Zeroes Command: Supported 00:19:54.586 Set Features Save Field: Not Supported 00:19:54.586 Reservations: Supported 00:19:54.586 Timestamp: Not Supported 00:19:54.586 Copy: Supported 00:19:54.586 Volatile Write Cache: Present 00:19:54.586 Atomic Write Unit (Normal): 1 00:19:54.586 Atomic Write Unit (PFail): 1 00:19:54.586 Atomic Compare & Write Unit: 1 00:19:54.586 Fused Compare & Write: Supported 00:19:54.586 Scatter-Gather List 00:19:54.586 SGL Command Set: Supported 00:19:54.586 SGL Keyed: Supported 00:19:54.586 SGL Bit Bucket Descriptor: Not Supported 00:19:54.586 SGL Metadata Pointer: Not Supported 00:19:54.586 Oversized SGL: Not Supported 00:19:54.586 SGL Metadata Address: Not Supported 00:19:54.586 SGL Offset: Supported 00:19:54.586 Transport SGL Data Block: Not Supported 00:19:54.586 Replay Protected Memory Block: Not Supported 00:19:54.586 00:19:54.586 Firmware Slot Information 00:19:54.586 ========================= 00:19:54.586 Active slot: 1 00:19:54.586 Slot 1 Firmware Revision: 24.09 00:19:54.586 00:19:54.586 00:19:54.586 Commands Supported and Effects 00:19:54.586 ============================== 00:19:54.586 Admin Commands 00:19:54.586 -------------- 00:19:54.586 Get Log Page (02h): Supported 00:19:54.586 Identify (06h): Supported 00:19:54.586 Abort (08h): Supported 00:19:54.586 Set Features (09h): Supported 00:19:54.586 Get Features (0Ah): Supported 00:19:54.586 Asynchronous Event Request (0Ch): Supported 00:19:54.586 Keep Alive (18h): Supported 00:19:54.586 I/O Commands 00:19:54.586 ------------ 00:19:54.586 Flush (00h): Supported LBA-Change 00:19:54.586 Write (01h): Supported LBA-Change 00:19:54.586 Read (02h): Supported 00:19:54.586 Compare (05h): Supported 00:19:54.586 Write Zeroes (08h): Supported LBA-Change 00:19:54.586 Dataset Management (09h): Supported LBA-Change 00:19:54.586 Copy (19h): Supported LBA-Change 00:19:54.586 00:19:54.586 Error Log 00:19:54.586 ========= 00:19:54.586 00:19:54.586 Arbitration 00:19:54.586 =========== 00:19:54.586 Arbitration Burst: 1 00:19:54.586 00:19:54.586 Power 
Management 00:19:54.586 ================ 00:19:54.586 Number of Power States: 1 00:19:54.586 Current Power State: Power State #0 00:19:54.586 Power State #0: 00:19:54.586 Max Power: 0.00 W 00:19:54.586 Non-Operational State: Operational 00:19:54.586 Entry Latency: Not Reported 00:19:54.586 Exit Latency: Not Reported 00:19:54.586 Relative Read Throughput: 0 00:19:54.586 Relative Read Latency: 0 00:19:54.586 Relative Write Throughput: 0 00:19:54.586 Relative Write Latency: 0 00:19:54.586 Idle Power: Not Reported 00:19:54.586 Active Power: Not Reported 00:19:54.586 Non-Operational Permissive Mode: Not Supported 00:19:54.586 00:19:54.586 Health Information 00:19:54.586 ================== 00:19:54.586 Critical Warnings: 00:19:54.586 Available Spare Space: OK 00:19:54.586 Temperature: OK 00:19:54.586 Device Reliability: OK 00:19:54.586 Read Only: No 00:19:54.586 Volatile Memory Backup: OK 00:19:54.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:54.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:54.586 Available Spare: 0% 00:19:54.587 Available Spare Threshold: 0% 00:19:54.587 Life Percentage Used:[2024-07-11 06:06:10.372186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.372200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.372216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.372255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:19:54.587 [2024-07-11 06:06:10.372914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.372940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.372949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.372958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373037] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:54.587 [2024-07-11 06:06:10.373056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.587 [2024-07-11 06:06:10.373080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.587 [2024-07-11 06:06:10.373097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.587 [2024-07-11 06:06:10.373126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.587 [2024-07-11 06:06:10.373150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.373159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.373167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.373187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.373228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.373569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.373595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.373604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.373612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.373632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.377815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.377830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.377853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.377899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.378276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.378299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.378308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.378326] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:54.587 [2024-07-11 06:06:10.378335] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:54.587 [2024-07-11 06:06:10.378355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.378391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.378422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.378776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.378800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.378812] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.378840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.378856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.378870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.378899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.379272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.379297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.379306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.379333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.379366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.379393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.379699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.379721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.379729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.379756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.379777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.379792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.379821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.380104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.380125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.380133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380141] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.380159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.587 [2024-07-11 06:06:10.380192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.587 [2024-07-11 06:06:10.380219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.587 [2024-07-11 06:06:10.380543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.587 [2024-07-11 06:06:10.380564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.587 [2024-07-11 06:06:10.380572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.587 [2024-07-11 06:06:10.380603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.587 [2024-07-11 06:06:10.380618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.588 [2024-07-11 06:06:10.380631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.588 [2024-07-11 06:06:10.380674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.588 [2024-07-11 06:06:10.380968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.588 [2024-07-11 06:06:10.380991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.588 [2024-07-11 06:06:10.381002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.588 [2024-07-11 06:06:10.381029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.588 [2024-07-11 06:06:10.381058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.588 [2024-07-11 06:06:10.381086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.588 [2024-07-11 06:06:10.381412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.588 [2024-07-11 06:06:10.381433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.588 [2024-07-11 06:06:10.381441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.588 [2024-07-11 06:06:10.381467] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.381483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:19:54.588 [2024-07-11 06:06:10.381496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:54.588 [2024-07-11 06:06:10.381523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:19:54.588 [2024-07-11 06:06:10.385782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:54.588 [2024-07-11 06:06:10.385818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:54.588 [2024-07-11 06:06:10.385827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:54.588 [2024-07-11 06:06:10.385835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:19:54.588 [2024-07-11 06:06:10.385853] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:19:54.588 0% 00:19:54.588 Data Units Read: 0 00:19:54.588 Data Units Written: 0 00:19:54.588 Host Read Commands: 0 00:19:54.588 Host Write Commands: 0 00:19:54.588 Controller Busy Time: 0 minutes 00:19:54.588 Power Cycles: 0 00:19:54.588 Power On Hours: 0 hours 00:19:54.588 Unsafe Shutdowns: 0 00:19:54.588 Unrecoverable Media Errors: 0 00:19:54.588 Lifetime Error Log Entries: 0 00:19:54.588 Warning Temperature Time: 0 minutes 00:19:54.588 Critical Temperature Time: 0 minutes 00:19:54.588 00:19:54.588 Number of Queues 00:19:54.588 ================ 00:19:54.588 Number of I/O Submission Queues: 127 00:19:54.588 Number of I/O Completion Queues: 127 00:19:54.588 00:19:54.588 Active Namespaces 00:19:54.588 ================= 00:19:54.588 Namespace ID:1 00:19:54.588 Error Recovery Timeout: Unlimited 00:19:54.588 Command Set Identifier: NVM (00h) 00:19:54.588 Deallocate: Supported 00:19:54.588 Deallocated/Unwritten Error: Not Supported 00:19:54.588 Deallocated Read Value: Unknown 00:19:54.588 Deallocate in Write Zeroes: Not Supported 00:19:54.588 Deallocated Guard Field: 0xFFFF 00:19:54.588 Flush: Supported 00:19:54.588 Reservation: Supported 00:19:54.588 Namespace Sharing Capabilities: Multiple Controllers 00:19:54.588 Size (in LBAs): 131072 (0GiB) 00:19:54.588 Capacity (in LBAs): 131072 (0GiB) 00:19:54.588 Utilization (in LBAs): 131072 (0GiB) 00:19:54.588 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:54.588 EUI64: ABCDEF0123456789 00:19:54.588 UUID: d5be916c-bb66-4711-87c3-947f9c0516f3 00:19:54.588 Thin Provisioning: Not Supported 00:19:54.588 Per-NS Atomic Units: Yes 00:19:54.588 Atomic Boundary Size (Normal): 0 00:19:54.588 Atomic Boundary Size (PFail): 0 00:19:54.588 Atomic Boundary Offset: 0 00:19:54.588 Maximum Single Source Range Length: 65535 00:19:54.588 Maximum Copy Length: 65535 00:19:54.588 Maximum Source Range Count: 1 00:19:54.588 NGUID/EUI64 Never Reused: No 00:19:54.588 Namespace Write Protected: No 00:19:54.588 Number of LBA Formats: 1 00:19:54.588 Current LBA Format: LBA Format #00 00:19:54.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:54.588 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.588 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.847 rmmod nvme_tcp 00:19:54.847 rmmod nvme_fabrics 00:19:54.847 rmmod nvme_keyring 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 79939 ']' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 79939 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 79939 ']' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 79939 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79939 00:19:54.847 killing process with pid 79939 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79939' 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 79939 00:19:54.847 06:06:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 79939 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:56.225 
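Here identify.sh finishes: the subsystem is deleted over JSON-RPC, nvmftestfini unloads the nvme transport modules, the target process (pid 79939) is killed, and the target network namespace and initiator address are cleaned up. Below is a minimal teardown sketch using the same names as the log; the rpc.py path and ordering are taken from the trace, and `ip netns delete` is an assumption for what the `_remove_spdk_ns` helper does.

```bash
# Teardown sketch mirroring the traced commands; adjust paths and PIDs to your setup.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
sync
modprobe -v -r nvme-tcp        # -v prints the rmmod calls seen in the log
modprobe -v -r nvme-fabrics
kill 79939                     # killprocess in the log also waits for the pid to exit
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if      # clear the initiator-side address
```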
************************************ 00:19:56.225 END TEST nvmf_identify 00:19:56.225 ************************************ 00:19:56.225 00:19:56.225 real 0m3.988s 00:19:56.225 user 0m10.748s 00:19:56.225 sys 0m0.832s 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.225 06:06:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:56.225 06:06:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.225 06:06:12 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:56.225 06:06:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:56.225 06:06:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.225 06:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.225 ************************************ 00:19:56.225 START TEST nvmf_perf 00:19:56.225 ************************************ 00:19:56.225 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:56.484 * Looking for test storage... 00:19:56.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:56.484 Cannot find device "nvmf_tgt_br" 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.484 Cannot find device "nvmf_tgt_br2" 00:19:56.484 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:56.485 Cannot find device "nvmf_tgt_br" 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:56.485 Cannot find device "nvmf_tgt_br2" 00:19:56.485 06:06:12 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.485 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:56.744 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:19:56.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:56.744 00:19:56.744 --- 10.0.0.2 ping statistics --- 00:19:56.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.744 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:56.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:56.744 00:19:56.744 --- 10.0.0.3 ping statistics --- 00:19:56.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.744 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:56.744 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:56.744 00:19:56.744 --- 10.0.0.1 ping statistics --- 00:19:56.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.745 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=80164 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 80164 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 80164 ']' 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
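At this point in the trace nvmf_veth_init has finished building the test network: a namespace for the target, two veth pairs, a bridge, and an iptables rule for the NVMe/TCP port, verified by the three pings above. As a reading aid, here is a condensed, hand-written sketch of that topology using the names and addresses visible in the trace (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_br, 10.0.0.1 and 10.0.0.2/24); it is a summary of the commands logged above, not the literal nvmf/common.sh source, and the second target interface (nvmf_tgt_if2, 10.0.0.3) is set up the same way but omitted here for brevity.

  # Initiator stays in the default namespace; the target runs inside nvmf_tgt_ns_spdk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                 # bridge ties the two legs together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                              # same sanity check as the trace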
00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.745 06:06:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:57.005 [2024-07-11 06:06:12.717994] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:19:57.005 [2024-07-11 06:06:12.718221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.005 [2024-07-11 06:06:12.901147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.265 [2024-07-11 06:06:13.153167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.265 [2024-07-11 06:06:13.153270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.265 [2024-07-11 06:06:13.153291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.265 [2024-07-11 06:06:13.153307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.265 [2024-07-11 06:06:13.153321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.265 [2024-07-11 06:06:13.153825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.265 [2024-07-11 06:06:13.153974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.265 [2024-07-11 06:06:13.154107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.265 [2024-07-11 06:06:13.154117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.525 [2024-07-11 06:06:13.361378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:57.800 06:06:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:58.366 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:58.366 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:58.624 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:58.624 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:58.883 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:58.883 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:19:58.883 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:58.883 06:06:14 nvmf_tcp.nvmf_perf -- 
host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:58.883 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.141 [2024-07-11 06:06:14.915770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.141 06:06:14 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.399 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:59.399 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.657 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:59.657 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:59.915 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.173 [2024-07-11 06:06:15.941449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.173 06:06:15 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:00.431 06:06:16 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:00.431 06:06:16 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:00.431 06:06:16 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:00.431 06:06:16 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:01.804 Initializing NVMe Controllers 00:20:01.804 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:01.804 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:01.804 Initialization complete. Launching workers. 00:20:01.804 ======================================================== 00:20:01.804 Latency(us) 00:20:01.804 Device Information : IOPS MiB/s Average min max 00:20:01.804 PCIE (0000:00:10.0) NSID 1 from core 0: 23096.96 90.22 1384.26 326.24 8905.69 00:20:01.804 ======================================================== 00:20:01.804 Total : 23096.96 90.22 1384.26 326.24 8905.69 00:20:01.804 00:20:01.804 06:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:03.191 Initializing NVMe Controllers 00:20:03.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:03.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:03.191 Initialization complete. Launching workers. 
00:20:03.191 ======================================================== 00:20:03.191 Latency(us) 00:20:03.191 Device Information : IOPS MiB/s Average min max 00:20:03.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2392.93 9.35 417.23 158.83 4459.09 00:20:03.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8127.00 7734.04 12019.92 00:20:03.191 ======================================================== 00:20:03.191 Total : 2516.92 9.83 797.05 158.83 12019.92 00:20:03.191 00:20:03.191 06:06:18 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:04.628 Initializing NVMe Controllers 00:20:04.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:04.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:04.628 Initialization complete. Launching workers. 00:20:04.628 ======================================================== 00:20:04.628 Latency(us) 00:20:04.628 Device Information : IOPS MiB/s Average min max 00:20:04.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7039.13 27.50 4545.69 656.83 12361.80 00:20:04.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3749.29 14.65 8576.49 5320.36 16628.80 00:20:04.628 ======================================================== 00:20:04.628 Total : 10788.42 42.14 5946.51 656.83 16628.80 00:20:04.628 00:20:04.628 06:06:20 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:04.628 06:06:20 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:07.910 Initializing NVMe Controllers 00:20:07.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.910 Controller IO queue size 128, less than required. 00:20:07.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.910 Controller IO queue size 128, less than required. 00:20:07.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:07.910 Initialization complete. Launching workers. 
00:20:07.910 ======================================================== 00:20:07.910 Latency(us) 00:20:07.910 Device Information : IOPS MiB/s Average min max 00:20:07.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1446.71 361.68 90777.43 46078.11 237402.76 00:20:07.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.61 153.15 233282.59 93528.79 538809.04 00:20:07.910 ======================================================== 00:20:07.910 Total : 2059.32 514.83 133169.95 46078.11 538809.04 00:20:07.910 00:20:07.910 06:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:07.910 Initializing NVMe Controllers 00:20:07.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.910 Controller IO queue size 128, less than required. 00:20:07.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.910 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:07.910 Controller IO queue size 128, less than required. 00:20:07.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:07.910 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:07.910 WARNING: Some requested NVMe devices were skipped 00:20:07.910 No valid NVMe controllers or AIO or URING devices found 00:20:07.910 06:06:23 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:11.194 Initializing NVMe Controllers 00:20:11.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.194 Controller IO queue size 128, less than required. 00:20:11.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.194 Controller IO queue size 128, less than required. 00:20:11.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.194 Initialization complete. Launching workers. 
00:20:11.194 00:20:11.194 ==================== 00:20:11.194 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:11.194 TCP transport: 00:20:11.194 polls: 6419 00:20:11.194 idle_polls: 3596 00:20:11.194 sock_completions: 2823 00:20:11.194 nvme_completions: 5317 00:20:11.194 submitted_requests: 7926 00:20:11.194 queued_requests: 1 00:20:11.194 00:20:11.194 ==================== 00:20:11.194 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:11.194 TCP transport: 00:20:11.194 polls: 7109 00:20:11.194 idle_polls: 3378 00:20:11.194 sock_completions: 3731 00:20:11.194 nvme_completions: 5917 00:20:11.194 submitted_requests: 8872 00:20:11.194 queued_requests: 1 00:20:11.194 ======================================================== 00:20:11.195 Latency(us) 00:20:11.195 Device Information : IOPS MiB/s Average min max 00:20:11.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1328.95 332.24 101672.89 52931.08 406795.08 00:20:11.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1478.94 369.73 87355.33 50072.52 298198.22 00:20:11.195 ======================================================== 00:20:11.195 Total : 2807.88 701.97 94131.69 50072.52 406795.08 00:20:11.195 00:20:11.195 06:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:11.195 06:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.195 06:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:11.195 06:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:11.195 06:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=f60d9e13-b276-483f-bff4-fa6b988bb6dd 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb f60d9e13-b276-483f-bff4-fa6b988bb6dd 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f60d9e13-b276-483f-bff4-fa6b988bb6dd 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:11.453 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:11.712 { 00:20:11.712 "uuid": "f60d9e13-b276-483f-bff4-fa6b988bb6dd", 00:20:11.712 "name": "lvs_0", 00:20:11.712 "base_bdev": "Nvme0n1", 00:20:11.712 "total_data_clusters": 1278, 00:20:11.712 "free_clusters": 1278, 00:20:11.712 "block_size": 4096, 00:20:11.712 "cluster_size": 4194304 00:20:11.712 } 00:20:11.712 ]' 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f60d9e13-b276-483f-bff4-fa6b988bb6dd") .free_clusters' 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f60d9e13-b276-483f-bff4-fa6b988bb6dd") .cluster_size' 00:20:11.712 5112 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # 
cs=4194304 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:11.712 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f60d9e13-b276-483f-bff4-fa6b988bb6dd lbd_0 5112 00:20:11.971 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8f48c530-010b-4b12-a16c-f4e08d93637b 00:20:11.971 06:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8f48c530-010b-4b12-a16c-f4e08d93637b lvs_n_0 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=ec12e937-e0f2-4230-89ac-a7aca75f54f5 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb ec12e937-e0f2-4230-89ac-a7aca75f54f5 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ec12e937-e0f2-4230-89ac-a7aca75f54f5 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:20:12.539 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:12.797 { 00:20:12.797 "uuid": "f60d9e13-b276-483f-bff4-fa6b988bb6dd", 00:20:12.797 "name": "lvs_0", 00:20:12.797 "base_bdev": "Nvme0n1", 00:20:12.797 "total_data_clusters": 1278, 00:20:12.797 "free_clusters": 0, 00:20:12.797 "block_size": 4096, 00:20:12.797 "cluster_size": 4194304 00:20:12.797 }, 00:20:12.797 { 00:20:12.797 "uuid": "ec12e937-e0f2-4230-89ac-a7aca75f54f5", 00:20:12.797 "name": "lvs_n_0", 00:20:12.797 "base_bdev": "8f48c530-010b-4b12-a16c-f4e08d93637b", 00:20:12.797 "total_data_clusters": 1276, 00:20:12.797 "free_clusters": 1276, 00:20:12.797 "block_size": 4096, 00:20:12.797 "cluster_size": 4194304 00:20:12.797 } 00:20:12.797 ]' 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ec12e937-e0f2-4230-89ac-a7aca75f54f5") .free_clusters' 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ec12e937-e0f2-4230-89ac-a7aca75f54f5") .cluster_size' 00:20:12.797 5104 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:20:12.797 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:20:12.798 06:06:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:20:12.798 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:12.798 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ec12e937-e0f2-4230-89ac-a7aca75f54f5 lbd_nest_0 5104 00:20:13.056 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=254f9718-119b-4fa3-8c6a-0186882e374a 00:20:13.056 06:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
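The free-space check above (get_lvs_free_mb) is simply free_clusters × cluster_size converted to MiB: 1278 × 4194304 B = 5112 MiB for lvs_0, and 1276 × 4194304 B = 5104 MiB for the nested lvs_n_0 layered on top of lbd_0. The snippet below is a stand-alone sketch of that computation, shelling out to the same bdev_lvol_get_lvstores RPC seen in the trace; the function name lvs_free_mb is invented for illustration and this is not the autotest helper itself (which caches the RPC output and quotes its jq filters differently).

  # Sketch: free MiB in an lvol store = free_clusters * cluster_size / 1 MiB.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs_free_mb() {
      local uuid=$1 lvs fc cs
      lvs=$("$rpc_py" bdev_lvol_get_lvstores)                  # same RPC as in the trace
      fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")
      cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<< "$lvs")
      echo $(( fc * cs / 1024 / 1024 ))                        # e.g. 1278 * 4194304 -> 5112
  }
  # lvs_free_mb f60d9e13-b276-483f-bff4-fa6b988bb6dd           # -> 5112, matching the trace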
00:20:13.315 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:13.315 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 254f9718-119b-4fa3-8c6a-0186882e374a 00:20:13.574 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.832 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:13.832 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:13.832 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:13.833 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:13.833 06:06:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:14.400 Initializing NVMe Controllers 00:20:14.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.400 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:14.400 WARNING: Some requested NVMe devices were skipped 00:20:14.400 No valid NVMe controllers or AIO or URING devices found 00:20:14.400 06:06:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:14.400 06:06:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.602 Initializing NVMe Controllers 00:20:26.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.602 Initialization complete. Launching workers. 
00:20:26.602 ======================================================== 00:20:26.602 Latency(us) 00:20:26.602 Device Information : IOPS MiB/s Average min max 00:20:26.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 821.07 102.63 1216.72 413.49 8540.85 00:20:26.602 ======================================================== 00:20:26.602 Total : 821.07 102.63 1216.72 413.49 8540.85 00:20:26.602 00:20:26.602 06:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:26.602 06:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:26.602 06:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.602 Initializing NVMe Controllers 00:20:26.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.602 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:26.602 WARNING: Some requested NVMe devices were skipped 00:20:26.602 No valid NVMe controllers or AIO or URING devices found 00:20:26.602 06:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:26.602 06:06:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.570 Initializing NVMe Controllers 00:20:36.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.570 Initialization complete. Launching workers. 
00:20:36.570 ======================================================== 00:20:36.570 Latency(us) 00:20:36.570 Device Information : IOPS MiB/s Average min max 00:20:36.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1312.90 164.11 24412.93 7767.62 59788.66 00:20:36.570 ======================================================== 00:20:36.570 Total : 1312.90 164.11 24412.93 7767.62 59788.66 00:20:36.570 00:20:36.570 06:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:36.570 06:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:36.570 06:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.570 Initializing NVMe Controllers 00:20:36.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.570 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:36.570 WARNING: Some requested NVMe devices were skipped 00:20:36.570 No valid NVMe controllers or AIO or URING devices found 00:20:36.570 06:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:36.570 06:06:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.539 Initializing NVMe Controllers 00:20:46.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.539 Controller IO queue size 128, less than required. 00:20:46.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:46.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.539 Initialization complete. Launching workers. 
00:20:46.539 ======================================================== 00:20:46.539 Latency(us) 00:20:46.539 Device Information : IOPS MiB/s Average min max 00:20:46.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3368.69 421.09 38021.65 15651.20 85831.89 00:20:46.539 ======================================================== 00:20:46.539 Total : 3368.69 421.09 38021.65 15651.20 85831.89 00:20:46.539 00:20:46.539 06:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.797 06:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 254f9718-119b-4fa3-8c6a-0186882e374a 00:20:47.056 06:07:02 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:47.313 06:07:03 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f48c530-010b-4b12-a16c-f4e08d93637b 00:20:47.571 06:07:03 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.833 rmmod nvme_tcp 00:20:47.833 rmmod nvme_fabrics 00:20:47.833 rmmod nvme_keyring 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 80164 ']' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 80164 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 80164 ']' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 80164 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80164 00:20:47.833 killing process with pid 80164 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80164' 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 80164 00:20:47.833 06:07:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 80164 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:50.373 00:20:50.373 real 0m53.813s 00:20:50.373 user 3m21.324s 00:20:50.373 sys 0m13.177s 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.373 06:07:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:50.373 ************************************ 00:20:50.373 END TEST nvmf_perf 00:20:50.373 ************************************ 00:20:50.373 06:07:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:50.373 06:07:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:50.373 06:07:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:50.373 06:07:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.373 06:07:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.373 ************************************ 00:20:50.373 START TEST nvmf_fio_host 00:20:50.373 ************************************ 00:20:50.373 06:07:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:50.373 * Looking for test storage... 
00:20:50.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
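The nvmf/common.sh lines above only assemble the target's argument array: build_nvmf_app_args appends -i "$NVMF_APP_SHM_ID" -e 0xFFFF, and once the namespace exists the array is prefixed with ip netns exec nvmf_tgt_ns_spdk. What ultimately runs (visible further down in this log as pid 81011) is nvmf_tgt -i 0 -e 0xFFFF -m 0xF inside the namespace. A minimal stand-in is sketched below; the polling loop is a rough paraphrase of waitforlisten rather than the real helper, and rpc_get_methods is used only as a cheap liveness probe.

  # Sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
  "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0xF &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll the RPC socket until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt is up as pid $nvmfpid"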
00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.373 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:50.374 Cannot find device "nvmf_tgt_br" 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.374 Cannot find device "nvmf_tgt_br2" 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:50.374 Cannot find device "nvmf_tgt_br" 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:50.374 Cannot find device "nvmf_tgt_br2" 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:50.374 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:50.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:50.633 00:20:50.633 --- 10.0.0.2 ping statistics --- 00:20:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.633 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:50.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:20:50.633 00:20:50.633 --- 10.0.0.3 ping statistics --- 00:20:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.633 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:20:50.633 00:20:50.633 --- 10.0.0.1 ping statistics --- 00:20:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.633 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=81011 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 81011 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 81011 ']' 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.633 06:07:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.633 [2024-07-11 06:07:06.541097] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:20:50.633 [2024-07-11 06:07:06.541243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.892 [2024-07-11 06:07:06.707132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.150 [2024-07-11 06:07:06.947540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:51.150 [2024-07-11 06:07:06.947602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.150 [2024-07-11 06:07:06.947620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.150 [2024-07-11 06:07:06.947634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.150 [2024-07-11 06:07:06.947667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.150 [2024-07-11 06:07:06.947845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.150 [2024-07-11 06:07:06.948398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.150 [2024-07-11 06:07:06.948521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.150 [2024-07-11 06:07:06.948538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.408 [2024-07-11 06:07:07.132334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:51.666 06:07:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.666 06:07:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:51.667 06:07:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.925 [2024-07-11 06:07:07.725759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.925 06:07:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:51.925 06:07:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.925 06:07:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.925 06:07:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:52.495 Malloc1 00:20:52.495 06:07:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.753 06:07:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:52.753 06:07:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.012 [2024-07-11 06:07:08.856411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.012 06:07:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.270 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:53.271 06:07:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:53.530 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:53.530 fio-3.35 00:20:53.530 Starting 1 thread 00:20:56.065 00:20:56.065 test: (groupid=0, jobs=1): err= 0: pid=81082: Thu Jul 11 06:07:11 2024 00:20:56.065 read: IOPS=7600, BW=29.7MiB/s (31.1MB/s)(59.6MiB/2008msec) 00:20:56.065 slat (usec): min=2, max=219, avg= 3.28, stdev= 2.65 00:20:56.065 clat (usec): min=2013, max=15111, avg=8723.14, stdev=734.60 00:20:56.065 lat (usec): min=2049, max=15114, avg=8726.42, stdev=734.54 00:20:56.065 clat percentiles (usec): 00:20:56.065 | 1.00th=[ 7308], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8160], 00:20:56.065 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:56.065 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:20:56.065 | 99.00th=[10552], 99.50th=[11469], 99.90th=[13042], 99.95th=[14091], 00:20:56.065 | 99.99th=[15008] 00:20:56.065 bw ( KiB/s): min=27856, max=31632, per=99.95%, avg=30384.00, stdev=1766.29, samples=4 00:20:56.065 iops : min= 6964, max= 7908, avg=7596.00, stdev=441.57, samples=4 00:20:56.065 write: IOPS=7587, BW=29.6MiB/s (31.1MB/s)(59.5MiB/2008msec); 0 zone resets 00:20:56.065 slat (usec): min=2, max=181, avg= 3.47, stdev= 2.11 00:20:56.065 clat (usec): min=1700, max=14776, avg=8007.45, stdev=687.12 00:20:56.065 lat (usec): min=1711, max=14779, avg=8010.91, stdev=687.20 00:20:56.065 clat percentiles (usec): 00:20:56.065 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7504], 00:20:56.065 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:20:56.065 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:20:56.065 | 99.00th=[ 
9634], 99.50th=[10814], 99.90th=[12780], 99.95th=[13173], 00:20:56.065 | 99.99th=[14746] 00:20:56.065 bw ( KiB/s): min=28744, max=31072, per=100.00%, avg=30354.00, stdev=1081.68, samples=4 00:20:56.065 iops : min= 7186, max= 7768, avg=7588.50, stdev=270.42, samples=4 00:20:56.065 lat (msec) : 2=0.01%, 4=0.12%, 10=97.64%, 20=2.24% 00:20:56.065 cpu : usr=67.86%, sys=23.27%, ctx=8, majf=0, minf=1539 00:20:56.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:56.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.065 issued rwts: total=15261,15235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.065 00:20:56.065 Run status group 0 (all jobs): 00:20:56.065 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=59.6MiB (62.5MB), run=2008-2008msec 00:20:56.065 WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=59.5MiB (62.4MB), run=2008-2008msec 00:20:56.065 ----------------------------------------------------- 00:20:56.065 Suppressions used: 00:20:56.065 count bytes template 00:20:56.065 1 57 /usr/src/fio/parse.c 00:20:56.065 1 8 libtcmalloc_minimal.so 00:20:56.065 ----------------------------------------------------- 00:20:56.065 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:20:56.065 06:07:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:56.065 06:07:11 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:56.325 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:56.325 fio-3.35 00:20:56.325 Starting 1 thread 00:20:58.861 00:20:58.861 test: (groupid=0, jobs=1): err= 0: pid=81125: Thu Jul 11 06:07:14 2024 00:20:58.861 read: IOPS=6993, BW=109MiB/s (115MB/s)(220MiB/2010msec) 00:20:58.861 slat (usec): min=3, max=160, avg= 4.85, stdev= 2.94 00:20:58.861 clat (usec): min=2575, max=20857, avg=10295.61, stdev=3039.22 00:20:58.861 lat (usec): min=2579, max=20861, avg=10300.46, stdev=3039.28 00:20:58.861 clat percentiles (usec): 00:20:58.861 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7439], 00:20:58.861 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11076], 00:20:58.861 | 70.00th=[11863], 80.00th=[12911], 90.00th=[14353], 95.00th=[15664], 00:20:58.861 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20317], 99.95th=[20579], 00:20:58.861 | 99.99th=[20841] 00:20:58.861 bw ( KiB/s): min=52032, max=60256, per=50.16%, avg=56120.00, stdev=4159.95, samples=4 00:20:58.861 iops : min= 3252, max= 3766, avg=3507.50, stdev=260.00, samples=4 00:20:58.861 write: IOPS=3996, BW=62.4MiB/s (65.5MB/s)(115MiB/1834msec); 0 zone resets 00:20:58.861 slat (usec): min=34, max=234, avg=40.85, stdev= 9.47 00:20:58.861 clat (usec): min=4575, max=25404, avg=14336.57, stdev=2683.34 00:20:58.861 lat (usec): min=4609, max=25443, avg=14377.43, stdev=2684.20 00:20:58.861 clat percentiles (usec): 00:20:58.861 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11207], 20.00th=[11994], 00:20:58.861 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14746], 00:20:58.861 | 70.00th=[15664], 80.00th=[16450], 90.00th=[17957], 95.00th=[19268], 00:20:58.861 | 99.00th=[21890], 99.50th=[22414], 99.90th=[24511], 99.95th=[25035], 00:20:58.861 | 99.99th=[25297] 00:20:58.861 bw ( KiB/s): min=54528, max=62560, per=91.35%, avg=58408.00, stdev=3992.99, samples=4 00:20:58.861 iops : min= 3408, max= 3910, avg=3650.50, stdev=249.56, samples=4 00:20:58.861 lat (msec) : 4=0.14%, 10=32.80%, 20=65.96%, 50=1.10% 00:20:58.861 cpu : usr=80.70%, sys=14.38%, ctx=12, majf=0, minf=2141 00:20:58.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:20:58.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.861 issued rwts: total=14056,7329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.861 00:20:58.861 Run status group 0 (all jobs): 00:20:58.861 READ: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=220MiB (230MB), run=2010-2010msec 00:20:58.861 WRITE: bw=62.4MiB/s (65.5MB/s), 62.4MiB/s-62.4MiB/s (65.5MB/s-65.5MB/s), io=115MiB (120MB), run=1834-1834msec 00:20:58.861 ----------------------------------------------------- 00:20:58.861 Suppressions used: 00:20:58.861 count bytes template 00:20:58.861 1 57 /usr/src/fio/parse.c 00:20:58.861 306 29376 /usr/src/fio/iolog.c 00:20:58.861 1 8 libtcmalloc_minimal.so 00:20:58.861 ----------------------------------------------------- 00:20:58.861 00:20:58.861 06:07:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:59.120 06:07:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:59.120 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:20:59.120 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:59.120 06:07:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:20:59.688 Nvme0n1 00:20:59.688 06:07:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a86e8954-956d-43d0-93e5-d063a09fdc14 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a86e8954-956d-43d0-93e5-d063a09fdc14 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a86e8954-956d-43d0-93e5-d063a09fdc14 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:59.958 { 00:20:59.958 "uuid": "a86e8954-956d-43d0-93e5-d063a09fdc14", 00:20:59.958 "name": "lvs_0", 00:20:59.958 "base_bdev": "Nvme0n1", 00:20:59.958 "total_data_clusters": 4, 00:20:59.958 "free_clusters": 4, 00:20:59.958 "block_size": 4096, 00:20:59.958 "cluster_size": 1073741824 00:20:59.958 } 00:20:59.958 ]' 00:20:59.958 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a86e8954-956d-43d0-93e5-d063a09fdc14") .free_clusters' 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a86e8954-956d-43d0-93e5-d063a09fdc14") .cluster_size' 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:21:00.313 4096 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:21:00.313 06:07:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 
lbd_0 4096 00:21:00.313 9d6222d5-8289-4b32-9bba-a1b9782c9a4c 00:21:00.313 06:07:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:00.602 06:07:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:00.860 06:07:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:01.119 06:07:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:01.378 fio-3.35 00:21:01.378 Starting 1 thread 00:21:03.915 00:21:03.915 test: (groupid=0, jobs=1): err= 0: pid=81229: Thu Jul 11 06:07:19 2024 00:21:03.915 read: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(39.4MiB/2011msec) 00:21:03.915 slat (usec): min=2, max=172, avg= 3.24, stdev= 3.03 00:21:03.915 clat (usec): min=3432, max=23584, avg=13272.55, stdev=1134.51 00:21:03.915 lat (usec): min=3442, max=23587, avg=13275.79, stdev=1134.23 00:21:03.915 clat 
percentiles (usec): 00:21:03.915 | 1.00th=[10945], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:21:03.915 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:21:03.915 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:21:03.915 | 99.00th=[15795], 99.50th=[16450], 99.90th=[21627], 99.95th=[22152], 00:21:03.915 | 99.99th=[23462] 00:21:03.915 bw ( KiB/s): min=19072, max=20496, per=99.91%, avg=20062.00, stdev=664.29, samples=4 00:21:03.915 iops : min= 4768, max= 5124, avg=5015.50, stdev=166.07, samples=4 00:21:03.915 write: IOPS=5016, BW=19.6MiB/s (20.5MB/s)(39.4MiB/2011msec); 0 zone resets 00:21:03.915 slat (usec): min=2, max=165, avg= 3.41, stdev= 2.59 00:21:03.915 clat (usec): min=2213, max=21661, avg=12042.76, stdev=1064.41 00:21:03.915 lat (usec): min=2231, max=21665, avg=12046.17, stdev=1064.24 00:21:03.915 clat percentiles (usec): 00:21:03.915 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:21:03.915 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:21:03.915 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:21:03.915 | 99.00th=[14353], 99.50th=[14746], 99.90th=[20317], 99.95th=[20579], 00:21:03.915 | 99.99th=[20841] 00:21:03.915 bw ( KiB/s): min=19904, max=20288, per=99.92%, avg=20050.00, stdev=181.77, samples=4 00:21:03.915 iops : min= 4976, max= 5072, avg=5012.50, stdev=45.44, samples=4 00:21:03.915 lat (msec) : 4=0.05%, 10=0.73%, 20=99.04%, 50=0.17% 00:21:03.915 cpu : usr=73.28%, sys=20.95%, ctx=5, majf=0, minf=1539 00:21:03.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:03.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.915 issued rwts: total=10095,10088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.915 00:21:03.915 Run status group 0 (all jobs): 00:21:03.915 READ: bw=19.6MiB/s (20.6MB/s), 19.6MiB/s-19.6MiB/s (20.6MB/s-20.6MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:21:03.915 WRITE: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:21:03.915 ----------------------------------------------------- 00:21:03.915 Suppressions used: 00:21:03.915 count bytes template 00:21:03.915 1 58 /usr/src/fio/parse.c 00:21:03.915 1 8 libtcmalloc_minimal.so 00:21:03.915 ----------------------------------------------------- 00:21:03.915 00:21:03.915 06:07:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:04.173 06:07:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fba92ba1-88b8-40cd-b13d-467a57dde0e1 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fba92ba1-88b8-40cd-b13d-467a57dde0e1 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fba92ba1-88b8-40cd-b13d-467a57dde0e1 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:21:04.432 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:04.690 { 00:21:04.690 "uuid": "a86e8954-956d-43d0-93e5-d063a09fdc14", 00:21:04.690 "name": "lvs_0", 00:21:04.690 "base_bdev": "Nvme0n1", 00:21:04.690 "total_data_clusters": 4, 00:21:04.690 "free_clusters": 0, 00:21:04.690 "block_size": 4096, 00:21:04.690 "cluster_size": 1073741824 00:21:04.690 }, 00:21:04.690 { 00:21:04.690 "uuid": "fba92ba1-88b8-40cd-b13d-467a57dde0e1", 00:21:04.690 "name": "lvs_n_0", 00:21:04.690 "base_bdev": "9d6222d5-8289-4b32-9bba-a1b9782c9a4c", 00:21:04.690 "total_data_clusters": 1022, 00:21:04.690 "free_clusters": 1022, 00:21:04.690 "block_size": 4096, 00:21:04.690 "cluster_size": 4194304 00:21:04.690 } 00:21:04.690 ]' 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fba92ba1-88b8-40cd-b13d-467a57dde0e1") .free_clusters' 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fba92ba1-88b8-40cd-b13d-467a57dde0e1") .cluster_size' 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:21:04.690 4088 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:21:04.690 06:07:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:04.949 ca18a857-8237-4782-8923-97bec04ba72b 00:21:04.949 06:07:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:05.206 06:07:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:05.463 06:07:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:05.722 06:07:21 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:05.722 06:07:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.979 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:05.980 fio-3.35 00:21:05.980 Starting 1 thread 00:21:08.512 00:21:08.512 test: (groupid=0, jobs=1): err= 0: pid=81301: Thu Jul 11 06:07:24 2024 00:21:08.512 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(35.6MiB/2012msec) 00:21:08.512 slat (usec): min=2, max=211, avg= 3.49, stdev= 2.96 00:21:08.512 clat (usec): min=3730, max=25989, avg=14717.92, stdev=1268.58 00:21:08.512 lat (usec): min=3740, max=25992, avg=14721.41, stdev=1268.32 00:21:08.512 clat percentiles (usec): 00:21:08.512 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:21:08.512 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:21:08.512 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:21:08.512 | 99.00th=[17433], 99.50th=[18220], 99.90th=[24511], 99.95th=[24773], 00:21:08.512 | 99.99th=[26084] 00:21:08.512 bw ( KiB/s): min=17216, max=18480, per=99.89%, avg=18108.00, stdev=597.33, samples=4 00:21:08.512 iops : min= 4304, max= 4620, avg=4527.00, stdev=149.33, samples=4 00:21:08.512 write: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(35.7MiB/2012msec); 0 zone resets 00:21:08.512 slat (usec): min=3, max=153, avg= 3.70, stdev= 2.01 00:21:08.512 clat (usec): min=2311, max=24804, avg=13332.58, stdev=1231.25 00:21:08.512 lat (usec): min=2322, max=24808, avg=13336.28, stdev=1231.15 00:21:08.512 clat percentiles (usec): 00:21:08.512 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:21:08.512 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:21:08.513 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:21:08.513 | 99.00th=[15926], 99.50th=[16909], 99.90th=[23200], 99.95th=[24511], 00:21:08.513 | 99.99th=[24773] 00:21:08.513 bw ( KiB/s): min=18048, max=18184, per=99.91%, avg=18130.00, stdev=63.46, samples=4 00:21:08.513 iops : min= 4512, max= 4546, avg=4532.50, stdev=15.86, samples=4 00:21:08.513 lat (msec) : 4=0.04%, 10=0.34%, 20=99.37%, 50=0.25% 00:21:08.513 cpu : usr=74.04%, sys=20.64%, ctx=4, majf=0, minf=1538 00:21:08.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:08.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:08.513 issued rwts: total=9118,9128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:08.513 00:21:08.513 Run status group 0 (all jobs): 00:21:08.513 READ: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=35.6MiB (37.3MB), run=2012-2012msec 00:21:08.513 WRITE: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=35.7MiB (37.4MB), run=2012-2012msec 00:21:08.513 ----------------------------------------------------- 00:21:08.513 Suppressions used: 00:21:08.513 count bytes template 00:21:08.513 1 58 /usr/src/fio/parse.c 00:21:08.513 1 8 libtcmalloc_minimal.so 00:21:08.513 ----------------------------------------------------- 00:21:08.513 00:21:08.513 06:07:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:08.771 06:07:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:08.771 06:07:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:09.030 06:07:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:09.289 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:09.548 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:09.818 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.077 rmmod nvme_tcp 00:21:10.077 rmmod nvme_fabrics 00:21:10.077 rmmod nvme_keyring 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 81011 ']' 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 81011 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 81011 ']' 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 81011 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.077 
06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81011 00:21:10.077 killing process with pid 81011 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81011' 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 81011 00:21:10.077 06:07:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 81011 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.452 06:07:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.711 06:07:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:11.711 00:21:11.711 real 0m21.432s 00:21:11.711 user 1m32.486s 00:21:11.711 sys 0m4.728s 00:21:11.711 06:07:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:11.711 ************************************ 00:21:11.711 END TEST nvmf_fio_host 00:21:11.711 06:07:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.711 ************************************ 00:21:11.711 06:07:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:11.711 06:07:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:11.711 06:07:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:11.711 06:07:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.711 06:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:11.711 ************************************ 00:21:11.711 START TEST nvmf_failover 00:21:11.711 ************************************ 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:11.711 * Looking for test storage... 
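A note on the lvol sizing reported in the nvmf_fio_host run above: lvs_0 sits directly on Nvme0n1 with a 1 GiB (1073741824-byte) cluster size and reports 4 free clusters, so get_lvs_free_mb resolves to 4 x 1024 MiB = 4096, the size passed to bdev_lvol_create for lbd_0. The nested store lvs_n_0, created on that 4096 MiB volume with the default 4 MiB (4194304-byte) clusters, reports 1022 free clusters, i.e. 1022 x 4 MiB = 4088 for lbd_nest_0; the two "missing" clusters are presumably consumed by lvolstore metadata. The arithmetic can be reproduced with a jq one-liner (an editorial sketch, not part of the test script):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
    | jq '.[] | select(.name=="lvs_n_0") | .free_clusters * .cluster_size / 1048576'    # prints 4088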
00:21:11.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.711 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.712 
06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:11.712 Cannot find device "nvmf_tgt_br" 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:11.712 Cannot find device "nvmf_tgt_br2" 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:11.712 Cannot find device "nvmf_tgt_br" 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:11.712 Cannot find device "nvmf_tgt_br2" 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:21:11.712 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
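The "Cannot find device" errors above are expected: nvmf_veth_init first tears down any topology left over from a previous run before rebuilding it. Condensed into plain shell (taken from the commands in the trace itself; the second target interface nvmf_tgt_if2/10.0.0.3 and the link-up steps are omitted for brevity), the topology it creates is:
  ip netns add nvmf_tgt_ns_spdk                              # target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address on the host side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                            # bridge joins the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
Once the bridge is up, the three ping checks below confirm host-to-namespace reachability in both directions.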
00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.971 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:11.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:21:11.972 00:21:11.972 --- 10.0.0.2 ping statistics --- 00:21:11.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.972 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:11.972 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:11.972 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:11.972 00:21:11.972 --- 10.0.0.3 ping statistics --- 00:21:11.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.972 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:21:11.972 00:21:11.972 --- 10.0.0.1 ping statistics --- 00:21:11.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.972 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81554 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81554 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81554 ']' 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
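With the target started inside the namespace, failover.sh does everything else over JSON-RPC (the unix-domain RPC socket is reachable from the host even though the target's networking is namespaced). Boiled down from the trace that follows (the loop is an editorial shorthand for the three separate add_listener calls in the script), the control-plane setup is:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
This gives the host-side bdevperf three portals on the same subsystem to fail over between.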
00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.972 06:07:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.230 [2024-07-11 06:07:27.998369] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:12.230 [2024-07-11 06:07:27.998550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.490 [2024-07-11 06:07:28.176487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.748 [2024-07-11 06:07:28.412781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.748 [2024-07-11 06:07:28.412843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.748 [2024-07-11 06:07:28.412861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.748 [2024-07-11 06:07:28.412876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.748 [2024-07-11 06:07:28.412887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.748 [2024-07-11 06:07:28.413075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.748 [2024-07-11 06:07:28.413624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.748 [2024-07-11 06:07:28.413672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.749 [2024-07-11 06:07:28.618622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:13.312 06:07:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.312 06:07:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:13.312 06:07:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.312 06:07:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.312 06:07:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:13.312 06:07:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.312 06:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:13.569 [2024-07-11 06:07:29.272991] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.569 06:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:13.825 Malloc0 00:21:13.826 06:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.083 06:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.341 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.599 [2024-07-11 06:07:30.380920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:14.599 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:14.856 [2024-07-11 06:07:30.629195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.856 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:15.113 [2024-07-11 06:07:30.877403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:15.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81612 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81612 /var/tmp/bdevperf.sock 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81612 ']' 00:21:15.113 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.114 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.114 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
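At this point the target side is fully provisioned: a TCP transport, a Malloc0 bdev (64 MB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners on ports 4420, 4421 and 4422, after which bdevperf is started against /var/tmp/bdevperf.sock. The same provisioning, collected from the rpc.py calls traced above (a reference sketch; the $rpc shorthand is just an abbreviation for the path printed in the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport, backing bdev, subsystem and namespace, exactly as traced.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on the same address give the host alternate paths to fail over across.
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done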
00:21:15.114 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.114 06:07:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:16.064 06:07:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.064 06:07:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:16.064 06:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.629 NVMe0n1 00:21:16.629 06:07:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.887 00:21:16.887 06:07:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81640 00:21:16.887 06:07:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.887 06:07:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:17.821 06:07:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.079 06:07:33 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:21.365 06:07:36 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.365 00:21:21.365 06:07:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.649 06:07:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:24.934 06:07:40 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.934 [2024-07-11 06:07:40.806589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.934 06:07:40 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:26.311 06:07:41 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:26.311 [2024-07-11 06:07:42.089957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:21:26.311 06:07:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 81640 00:21:32.873 0 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 81612 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81612 ']' 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81612 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81612 
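The host side above starts bdevperf in RPC-wait mode, registers NVMe0 against ports 4420 and 4421 (the second attach adds the failover path), and then drives the 15-second verify workload through bdevperf.py while the script swaps listeners underneath it. A sketch of just those host-side commands as traced; pid bookkeeping (bdevperf_pid, run_test_pid) is omitted and $spdk is shorthand for the repo path in the log:

  spdk=/home/vagrant/spdk_repo/spdk
  # bdevperf with -z waits for RPC configuration before running; remaining flags as traced.
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # Primary path (4420) and failover path (4421) for the same controller.
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the I/O run that the rest of this trace reports on.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &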
00:21:32.873 killing process with pid 81612 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81612' 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81612 00:21:32.873 06:07:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81612 00:21:33.139 06:07:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:33.139 [2024-07-11 06:07:30.983285] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:33.139 [2024-07-11 06:07:30.983448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81612 ] 00:21:33.139 [2024-07-11 06:07:31.148447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.139 [2024-07-11 06:07:31.353831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.139 [2024-07-11 06:07:31.543429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:33.139 Running I/O for 15 seconds... 00:21:33.139 [2024-07-11 06:07:33.910709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.139 [2024-07-11 06:07:33.910781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.910816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.139 [2024-07-11 06:07:33.910853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.910875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.139 [2024-07-11 06:07:33.910910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.910931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.139 [2024-07-11 06:07:33.910949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.910968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:21:33.139 [2024-07-11 06:07:33.911291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.139 [2024-07-11 06:07:33.911331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:33.139 [2024-07-11 06:07:33.911933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.911955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.911979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.139 [2024-07-11 06:07:33.912424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.139 [2024-07-11 06:07:33.912446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912864] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.912974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.912997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 
06:07:33.913809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.913960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.913982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.140 [2024-07-11 06:07:33.914360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.140 [2024-07-11 06:07:33.914382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.914969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.914990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 
06:07:33.915614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.915972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.915997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.141 [2024-07-11 06:07:33.916290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.141 [2024-07-11 06:07:33.916333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:33.916382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:33.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:33.916469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.916958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.916979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.917021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.917066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.917112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:33.917153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:33.917192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:21:33.142 [2024-07-11 06:07:33.917238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.142 [2024-07-11 06:07:33.917254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.142 [2024-07-11 06:07:33.917270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49312 len:8 PRP1 0x0 PRP2 0x0 00:21:33.142 [2024-07-11 06:07:33.917288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:33.917548] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:21:33.142 [2024-07-11 06:07:33.917577] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:33.142 [2024-07-11 06:07:33.917598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.142 [2024-07-11 06:07:33.921725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.142 [2024-07-11 06:07:33.921786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:21:33.142 [2024-07-11 06:07:33.961929] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
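The dump above is the tail of the first path switch: the queued I/O on the 4420 qpair is aborted with SQ DELETION, bdev_nvme starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. Each switch is provoked purely by listener changes on the target, per the failover.sh steps traced earlier; roughly, as a sketch with the ports and sleeps taken from that trace ($rpc/$nqn are shorthand only):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the active port so outstanding I/O is aborted and bdev_nvme fails over to 4421.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # (In between, the host also attaches the 4422 path -- failover.sh@47 above.)
  # Retire the current path, then bring the original port back, forcing further switches.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422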
00:21:33.142 [2024-07-11 06:07:37.524444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.142 [2024-07-11 06:07:37.524886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.524925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.524947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.524965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 
06:07:37.524985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.142 [2024-07-11 06:07:37.525328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.142 [2024-07-11 06:07:37.525348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525848] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.143 [2024-07-11 06:07:37.525905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.525944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.525964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.525983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.143 [2024-07-11 06:07:37.526591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.143 [2024-07-11 06:07:37.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.526932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.526971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.526991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 
[2024-07-11 06:07:37.527126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.527248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527521] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.527976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.527996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.144 [2024-07-11 06:07:37.528207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.528245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.528292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.144 [2024-07-11 06:07:37.528345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.144 [2024-07-11 06:07:37.528365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.145 [2024-07-11 06:07:37.528383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.145 [2024-07-11 06:07:37.528422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.145 [2024-07-11 06:07:37.528460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.145 [2024-07-11 06:07:37.528498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.145 [2024-07-11 06:07:37.528536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.528973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.528994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.529012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.529051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.529090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.529129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.145 [2024-07-11 06:07:37.529168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(5) to be set 00:21:33.145 [2024-07-11 06:07:37.529209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125016 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125344 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125352 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125360 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125368 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125376 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 
06:07:37.529581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125384 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125392 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125400 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125408 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125416 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.529946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.529960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.529974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125424 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.529991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.530008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.145 [2024-07-11 06:07:37.530022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.145 [2024-07-11 06:07:37.530036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125432 len:8 PRP1 0x0 PRP2 0x0 00:21:33.145 [2024-07-11 06:07:37.530054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.145 [2024-07-11 06:07:37.530070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.146 [2024-07-11 06:07:37.530084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.146 [2024-07-11 06:07:37.530098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125440 len:8 PRP1 0x0 PRP2 0x0 00:21:33.146 [2024-07-11 06:07:37.530115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.146 [2024-07-11 06:07:37.530149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.146 [2024-07-11 06:07:37.530163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125448 len:8 PRP1 0x0 PRP2 0x0 00:21:33.146 [2024-07-11 06:07:37.530181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.146 [2024-07-11 06:07:37.530215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.146 [2024-07-11 06:07:37.530229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125456 len:8 PRP1 0x0 PRP2 0x0 00:21:33.146 [2024-07-11 06:07:37.530247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.146 [2024-07-11 06:07:37.530277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.146 [2024-07-11 06:07:37.530292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125464 len:8 PRP1 0x0 PRP2 0x0 00:21:33.146 [2024-07-11 06:07:37.530309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530573] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:21:33.146 [2024-07-11 06:07:37.530601] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:33.146 [2024-07-11 06:07:37.530687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.146 [2024-07-11 06:07:37.530724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.146 [2024-07-11 06:07:37.530765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.146 [2024-07-11 06:07:37.530802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.146 [2024-07-11 06:07:37.530838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:37.530855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.146 [2024-07-11 06:07:37.530924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:21:33.146 [2024-07-11 06:07:37.535160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.146 [2024-07-11 06:07:37.572030] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
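At this point the log shows the second hop of the failover chain: the qpair on 10.0.0.2:4421 is disconnected and freed, bdev_nvme moves to 10.0.0.2:4422, and the controller reset again completes successfully. A second hypothetical helper, sketched below under the same assumptions as the one earlier (reviewer-side only, reading this console log from stdin, relying solely on the bracketed timestamps and the bdev_nvme notices shown above), measures how long each failover takes from the "Start failover" notice to the next successful reset.

#!/usr/bin/env python3
# Hypothetical triage helper: measure failover-to-reset latency in a log like
# the one above. Assumes each "Resetting controller successful" notice follows
# the "Start failover" notice it belongs to; not part of SPDK or its tests.
import re
import sys
from datetime import datetime

START_RE = re.compile(
    r"\[([^\]]+)\] bdev_nvme\.c:\d+:bdev_nvme_failover_trid: \*NOTICE\*: "
    r"Start failover from (\S+) to (\S+)")
DONE_RE = re.compile(
    r"\[([^\]]+)\] bdev_nvme\.c:\d+:_bdev_nvme_reset_ctrlr_complete: "
    r"\*NOTICE\*: Resetting controller successful")

def parse_ts(text):
    # Timestamps look like 2024-07-11 06:07:37.530601 in the messages above.
    return datetime.strptime(text, "%Y-%m-%d %H:%M:%S.%f")

def failover_durations(stream):
    pending = None  # (start time, source trid, destination trid) of the latest failover
    for line in stream:
        for ts, src, dst in START_RE.findall(line):
            pending = (parse_ts(ts), src, dst)
        for ts in DONE_RE.findall(line):
            if pending:
                start, src, dst = pending
                yield src, dst, (parse_ts(ts) - start).total_seconds()
                pending = None

if __name__ == "__main__":
    for src, dst, seconds in failover_durations(sys.stdin):
        print(f"failover {src} -> {dst}: reset completed after {seconds:.3f}s")

For the two cycles visible so far this would report roughly 0.04s per failover (06:07:33.917 to 06:07:33.961, and 06:07:37.530 to 06:07:37.572), which is a quick way to spot a failover that stalls instead of completing.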
00:21:33.146 [2024-07-11 06:07:42.090543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.090612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.090694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.090969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.090990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 
06:07:42.091088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.146 [2024-07-11 06:07:42.091754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.146 [2024-07-11 06:07:42.091891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.146 [2024-07-11 06:07:42.091910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.091930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.091949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.091982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.092059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.092102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.092145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124576 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.092964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.092984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.093003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.093023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.093041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.093061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.093088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.093109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.147 [2024-07-11 06:07:42.093128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.093149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.093167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.147 [2024-07-11 06:07:42.093187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.147 [2024-07-11 06:07:42.093206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:33.149 [2024-07-11 06:07:42.093246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.149 [2024-07-11 06:07:42.093285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.149 [2024-07-11 06:07:42.093324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.149 [2024-07-11 06:07:42.093363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.149 [2024-07-11 06:07:42.093401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.149 [2024-07-11 06:07:42.093441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 
06:07:42.093659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.149 [2024-07-11 06:07:42.093795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.149 [2024-07-11 06:07:42.093817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.093835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.093874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.093895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.093914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.093934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.093953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.093973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.093991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.094029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.094068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.094132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.150 [2024-07-11 06:07:42.094979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.094999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.095017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.150 [2024-07-11 06:07:42.095038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.150 [2024-07-11 06:07:42.095056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.151 [2024-07-11 06:07:42.095095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.151 [2024-07-11 06:07:42.095134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.151 [2024-07-11 06:07:42.095181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.151 [2024-07-11 06:07:42.095222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.151 [2024-07-11 06:07:42.095261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:21:33.151 [2024-07-11 06:07:42.095304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124824 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125248 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125256 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125264 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125272 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125280 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125288 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:33.151 [2024-07-11 06:07:42.095794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125296 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125304 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.095946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125312 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.095963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.095980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.095993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.096008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125320 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.096025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.096042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.096055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.096069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125328 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.096087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.096104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.096117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.151 [2024-07-11 06:07:42.096131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125336 len:8 PRP1 0x0 PRP2 0x0 00:21:33.151 [2024-07-11 06:07:42.096148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.151 [2024-07-11 06:07:42.096169] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.151 [2024-07-11 06:07:42.096183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125344 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.152 [2024-07-11 06:07:42.096270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125352 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.152 [2024-07-11 06:07:42.096345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125360 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.152 [2024-07-11 06:07:42.096418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125368 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.152 [2024-07-11 06:07:42.096511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125376 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:33.152 [2024-07-11 06:07:42.096589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:33.152 [2024-07-11 06:07:42.096603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125384 len:8 PRP1 0x0 PRP2 0x0 00:21:33.152 [2024-07-11 06:07:42.096620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.096892] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:21:33.152 [2024-07-11 06:07:42.096920] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:33.152 [2024-07-11 06:07:42.096990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.152 [2024-07-11 06:07:42.097017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.097039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.152 [2024-07-11 06:07:42.097057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.097075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.152 [2024-07-11 06:07:42.097093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.097128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.152 [2024-07-11 06:07:42.097147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.152 [2024-07-11 06:07:42.097165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.152 [2024-07-11 06:07:42.097233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:21:33.152 [2024-07-11 06:07:42.101276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.152 [2024-07-11 06:07:42.138475] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:33.152 00:21:33.152 Latency(us) 00:21:33.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.152 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.152 Verification LBA range: start 0x0 length 0x4000 00:21:33.152 NVMe0n1 : 15.01 6789.63 26.52 209.77 0.00 18248.30 767.07 20733.21 00:21:33.152 =================================================================================================================== 00:21:33.152 Total : 6789.63 26.52 209.77 0.00 18248.30 767.07 20733.21 00:21:33.152 Received shutdown signal, test time was about 15.000000 seconds 00:21:33.152 00:21:33.152 Latency(us) 00:21:33.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.152 =================================================================================================================== 00:21:33.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:33.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81819 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81819 /var/tmp/bdevperf.sock 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 81819 ']' 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.152 06:07:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:34.554 06:07:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.554 06:07:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:34.555 06:07:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:34.555 [2024-07-11 06:07:50.275938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.555 06:07:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:34.813 [2024-07-11 06:07:50.556302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:34.813 06:07:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.071 NVMe0n1 00:21:35.071 06:07:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.330 00:21:35.330 06:07:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.897 00:21:35.897 06:07:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.897 06:07:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:36.155 06:07:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.155 06:07:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:39.436 06:07:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.436 06:07:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:39.436 06:07:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81896 00:21:39.436 06:07:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.436 06:07:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 81896 00:21:40.810 0 00:21:40.810 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:40.810 [2024-07-11 06:07:49.112805] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:40.810 [2024-07-11 06:07:49.112994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81819 ] 00:21:40.810 [2024-07-11 06:07:49.286619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.810 [2024-07-11 06:07:49.484792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.810 [2024-07-11 06:07:49.670723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:40.810 [2024-07-11 06:07:52.036153] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:40.810 [2024-07-11 06:07:52.036323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.810 [2024-07-11 06:07:52.036363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.810 [2024-07-11 06:07:52.036391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.810 [2024-07-11 06:07:52.036415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.810 [2024-07-11 06:07:52.036436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.810 [2024-07-11 06:07:52.036458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.810 [2024-07-11 06:07:52.036478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.810 [2024-07-11 06:07:52.036503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.810 [2024-07-11 06:07:52.036523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:40.810 [2024-07-11 06:07:52.036603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.810 [2024-07-11 06:07:52.036664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:21:40.810 [2024-07-11 06:07:52.043727] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:40.810 Running I/O for 1 seconds... 00:21:40.810 00:21:40.810 Latency(us) 00:21:40.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.810 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:40.810 Verification LBA range: start 0x0 length 0x4000 00:21:40.810 NVMe0n1 : 1.01 5238.10 20.46 0.00 0.00 24336.89 3083.17 20971.52 00:21:40.810 =================================================================================================================== 00:21:40.810 Total : 5238.10 20.46 0.00 0.00 24336.89 3083.17 20971.52 00:21:40.810 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.810 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:40.810 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.068 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:41.068 06:07:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.326 06:07:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.583 06:07:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:44.867 06:08:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 81819 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81819 ']' 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81819 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81819 00:21:44.868 killing process with pid 81819 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81819' 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81819 00:21:44.868 06:08:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81819 00:21:46.252 
06:08:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:46.252 06:08:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.252 rmmod nvme_tcp 00:21:46.252 rmmod nvme_fabrics 00:21:46.252 rmmod nvme_keyring 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81554 ']' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81554 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 81554 ']' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 81554 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81554 00:21:46.252 killing process with pid 81554 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81554' 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 81554 00:21:46.252 06:08:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 81554 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 
addr flush nvmf_init_if 00:21:47.628 00:21:47.628 real 0m35.901s 00:21:47.628 user 2m17.645s 00:21:47.628 sys 0m5.577s 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.628 ************************************ 00:21:47.628 END TEST nvmf_failover 00:21:47.628 ************************************ 00:21:47.628 06:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.628 06:08:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.628 06:08:03 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:47.628 06:08:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.628 06:08:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.628 06:08:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.628 ************************************ 00:21:47.628 START TEST nvmf_host_discovery 00:21:47.628 ************************************ 00:21:47.628 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:47.628 * Looking for test storage... 00:21:47.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.629 
06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:47.629 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:47.888 Cannot find device "nvmf_tgt_br" 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.888 Cannot find device "nvmf_tgt_br2" 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:47.888 Cannot find device "nvmf_tgt_br" 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:47.888 Cannot find device "nvmf_tgt_br2" 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:47.888 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:48.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:21:48.147 00:21:48.147 --- 10.0.0.2 ping statistics --- 00:21:48.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.147 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:48.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:21:48.147 00:21:48.147 --- 10.0.0.3 ping statistics --- 00:21:48.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.147 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:21:48.147 00:21:48.147 --- 10.0.0.1 ping statistics --- 00:21:48.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.147 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=82178 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 82178 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82178 ']' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.147 06:08:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.147 [2024-07-11 06:08:04.014763] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:48.147 [2024-07-11 06:08:04.014916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.407 [2024-07-11 06:08:04.195787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.666 [2024-07-11 06:08:04.415114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:48.666 [2024-07-11 06:08:04.415189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.666 [2024-07-11 06:08:04.415206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.666 [2024-07-11 06:08:04.415220] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.666 [2024-07-11 06:08:04.415232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.666 [2024-07-11 06:08:04.415270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.925 [2024-07-11 06:08:04.638315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 [2024-07-11 06:08:05.078927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 [2024-07-11 06:08:05.086969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.184 null0 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.184 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.443 null1 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82216 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82216 /tmp/host.sock 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 82216 ']' 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:49.443 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.443 06:08:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.443 [2024-07-11 06:08:05.244079] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:21:49.443 [2024-07-11 06:08:05.244272] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82216 ] 00:21:49.702 [2024-07-11 06:08:05.423712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.962 [2024-07-11 06:08:05.669490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.962 [2024-07-11 06:08:05.879472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.554 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.813 [2024-07-11 06:08:06.620166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:50.813 
06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.813 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.814 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.814 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.814 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.814 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:51.073 06:08:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:51.640 [2024-07-11 06:08:07.254448] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.640 [2024-07-11 06:08:07.254526] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.640 [2024-07-11 06:08:07.254584] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.640 [2024-07-11 06:08:07.260545] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:51.640 [2024-07-11 06:08:07.326238] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:51.640 [2024-07-11 06:08:07.326324] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.208 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.209 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.209 06:08:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.209 06:08:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.209 06:08:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.209 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:52.468 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.468 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.468 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.468 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 [2024-07-11 06:08:08.231978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.469 [2024-07-11 06:08:08.232863] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:52.469 [2024-07-11 06:08:08.232919] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:52.469 [2024-07-11 06:08:08.238895] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.469 [2024-07-11 06:08:08.297419] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.469 [2024-07-11 06:08:08.297505] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:52.469 [2024-07-11 06:08:08.297541] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 [2024-07-11 06:08:08.457293] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:52.729 [2024-07-11 06:08:08.457375] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:52.729 [2024-07-11 06:08:08.462805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.729 [2024-07-11 06:08:08.462855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.729 [2024-07-11 06:08:08.462877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.729 [2024-07-11 06:08:08.462892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.729 [2024-07-11 06:08:08.462907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.729 [2024-07-11 06:08:08.462920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.729 [2024-07-11 06:08:08.462934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.729 [2024-07-11 06:08:08.462948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.729 [2024-07-11 06:08:08.462961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:21:52.729 [2024-07-11 06:08:08.463336] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:52.729 [2024-07-11 06:08:08.463378] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.729 [2024-07-11 06:08:08.463484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.729 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.988 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:52.989 
06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.989 06:08:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 [2024-07-11 06:08:09.895300] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:54.366 [2024-07-11 06:08:09.895347] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:54.366 [2024-07-11 06:08:09.895396] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:54.366 [2024-07-11 06:08:09.901379] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:54.366 [2024-07-11 06:08:09.972215] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:54.366 [2024-07-11 06:08:09.972344] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:54.366 request: 00:21:54.366 { 00:21:54.366 "name": "nvme", 00:21:54.366 "trtype": "tcp", 00:21:54.366 "traddr": "10.0.0.2", 00:21:54.366 "adrfam": "ipv4", 00:21:54.366 "trsvcid": "8009", 00:21:54.366 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:54.366 "wait_for_attach": true, 00:21:54.366 "method": "bdev_nvme_start_discovery", 00:21:54.366 "req_id": 1 00:21:54.366 } 00:21:54.366 Got JSON-RPC error response 00:21:54.366 response: 00:21:54.366 { 00:21:54.366 "code": -17, 00:21:54.366 "message": "File exists" 00:21:54.366 } 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:54.366 06:08:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 request: 00:21:54.366 { 00:21:54.366 "name": "nvme_second", 00:21:54.366 "trtype": "tcp", 00:21:54.366 "traddr": "10.0.0.2", 00:21:54.366 "adrfam": "ipv4", 00:21:54.366 "trsvcid": "8009", 00:21:54.366 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:54.366 "wait_for_attach": true, 00:21:54.366 "method": "bdev_nvme_start_discovery", 00:21:54.366 "req_id": 1 00:21:54.366 } 00:21:54.366 Got JSON-RPC error response 00:21:54.366 response: 00:21:54.366 { 00:21:54.366 "code": -17, 00:21:54.366 "message": "File exists" 00:21:54.366 } 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 06:08:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 06:08:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.743 [2024-07-11 06:08:11.261001] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.743 [2024-07-11 06:08:11.261082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:21:55.743 [2024-07-11 06:08:11.261150] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:55.743 [2024-07-11 06:08:11.261169] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:55.743 [2024-07-11 06:08:11.261184] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:56.677 [2024-07-11 06:08:12.261007] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.678 [2024-07-11 06:08:12.261071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:21:56.678 [2024-07-11 06:08:12.261133] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:56.678 [2024-07-11 06:08:12.261150] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:56.678 [2024-07-11 06:08:12.261165] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:57.614 [2024-07-11 06:08:13.260737] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:57.614 request: 00:21:57.614 { 00:21:57.614 "name": "nvme_second", 00:21:57.614 "trtype": "tcp", 00:21:57.614 "traddr": "10.0.0.2", 00:21:57.614 "adrfam": "ipv4", 00:21:57.614 "trsvcid": "8010", 00:21:57.614 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:57.614 "wait_for_attach": false, 00:21:57.614 "attach_timeout_ms": 3000, 00:21:57.614 "method": "bdev_nvme_start_discovery", 00:21:57.614 "req_id": 1 00:21:57.614 } 00:21:57.614 Got JSON-RPC error response 00:21:57.614 response: 00:21:57.614 { 00:21:57.614 "code": -110, 
00:21:57.614 "message": "Connection timed out" 00:21:57.614 } 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82216 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.614 rmmod nvme_tcp 00:21:57.614 rmmod nvme_fabrics 00:21:57.614 rmmod nvme_keyring 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 82178 ']' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 82178 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 82178 ']' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 82178 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82178 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:57.614 06:08:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:57.614 killing process with pid 82178 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82178' 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 82178 00:21:57.614 06:08:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 82178 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.007 00:21:59.007 real 0m11.339s 00:21:59.007 user 0m21.846s 00:21:59.007 sys 0m2.142s 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.007 ************************************ 00:21:59.007 END TEST nvmf_host_discovery 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.007 ************************************ 00:21:59.007 06:08:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.007 06:08:14 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:59.007 06:08:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.007 06:08:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.007 06:08:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.007 ************************************ 00:21:59.007 START TEST nvmf_host_multipath_status 00:21:59.007 ************************************ 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:59.007 * Looking for test storage... 
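The nvmf_host_multipath_status run that begins here first builds a veth/network-namespace test topology via nvmftestinit and nvmf_veth_init. The following is a condensed sketch of that setup, assembled only from the ip/iptables commands traced further down in this log (interface names and the 10.0.0.x addresses appear there verbatim); it summarizes the trace rather than adding any step to it.

    # Target runs in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target-side pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target-side pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge                                  # bridge ties the root-namespace ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # The trace additionally brings every link (and the bridge) up and verifies
    # reachability of 10.0.0.1/2/3 with single pings before starting nvmf_tgt.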
00:21:59.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.007 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:59.266 Cannot find device "nvmf_tgt_br" 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:21:59.266 Cannot find device "nvmf_tgt_br2" 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:59.266 Cannot find device "nvmf_tgt_br" 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:59.266 Cannot find device "nvmf_tgt_br2" 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:21:59.266 06:08:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.266 06:08:15 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.266 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:59.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:59.526 00:21:59.526 --- 10.0.0.2 ping statistics --- 00:21:59.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.526 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:59.526 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.526 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:21:59.526 00:21:59.526 --- 10.0.0.3 ping statistics --- 00:21:59.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.526 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:59.526 00:21:59.526 --- 10.0.0.1 ping statistics --- 00:21:59.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.526 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82672 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82672 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82672 ']' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.526 06:08:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.526 [2024-07-11 06:08:15.410339] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
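With the network up and nvmf_tgt (pid 82672) listening on /var/tmp/spdk.sock inside the namespace, the trace below drives both sides over JSON-RPC. This is a condensed sketch of that rpc.py sequence, with NQNs, addresses, ports and socket paths exactly as they appear in this log; the actual trace interleaves bdev_nvme_get_io_paths checks and bdevperf I/O between these steps.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default RPC socket /var/tmp/spdk.sock, inside nvmf_tgt_ns_spdk):
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf (-z -r /var/tmp/bdevperf.sock) attaches both listeners as one multipath controller.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # ANA states are flipped on the target, then each path is inspected on the host:
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'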
00:21:59.526 [2024-07-11 06:08:15.410552] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.785 [2024-07-11 06:08:15.594052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.044 [2024-07-11 06:08:15.818175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.044 [2024-07-11 06:08:15.818282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.044 [2024-07-11 06:08:15.818299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.044 [2024-07-11 06:08:15.818313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.044 [2024-07-11 06:08:15.818325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.044 [2024-07-11 06:08:15.818480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.044 [2024-07-11 06:08:15.818480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.302 [2024-07-11 06:08:16.028894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82672 00:22:00.560 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.818 [2024-07-11 06:08:16.663222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.818 06:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:01.076 Malloc0 00:22:01.334 06:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:01.592 06:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.851 06:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.109 [2024-07-11 06:08:17.813226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.109 06:08:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:02.367 [2024-07-11 06:08:18.093438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.367 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82728 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82728 /var/tmp/bdevperf.sock 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 82728 ']' 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.368 06:08:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.302 06:08:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.302 06:08:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:03.302 06:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:03.560 06:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:04.126 Nvme0n1 00:22:04.126 06:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:04.385 Nvme0n1 00:22:04.385 06:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:04.385 06:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.322 06:08:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:06.322 06:08:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:06.580 06:08:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.839 06:08:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:07.775 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:07.775 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:07.775 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.775 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.036 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.036 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.036 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.036 06:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.295 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.295 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.553 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.553 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.812 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.812 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.812 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.812 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.071 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.071 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:09.071 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.071 06:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.330 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.330 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:22:09.330 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.330 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.589 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.589 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:09.589 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.848 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:10.107 06:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:11.043 06:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:11.043 06:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:11.043 06:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.043 06:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.302 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.302 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:11.302 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.302 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:11.561 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.561 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:11.561 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.561 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.819 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.820 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.820 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.820 06:08:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.078 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.078 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:12.078 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.078 06:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.337 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.337 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:12.337 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.338 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.596 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.596 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:12.596 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:13.163 06:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:13.163 06:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.540 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.799 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:22:14.799 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.799 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.799 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.058 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.058 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.058 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.058 06:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.316 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.316 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.316 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.316 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.574 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.574 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.574 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.574 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.833 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.833 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:15.833 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.092 06:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.350 06:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:17.285 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:17.285 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.285 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.285 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.544 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.544 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.544 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.544 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:18.109 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.109 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:18.109 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.109 06:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.367 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.367 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.367 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.367 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.625 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.625 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.625 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.625 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.883 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.883 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.883 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.883 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.141 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.141 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:22:19.141 06:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:19.400 06:08:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:19.658 06:08:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:20.625 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:20.625 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.625 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.625 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.883 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.883 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.883 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.883 06:08:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.142 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.142 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.142 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.142 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.401 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.401 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.401 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.401 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.659 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.659 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.659 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.659 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.917 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.917 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:21.917 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.917 06:08:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.176 06:08:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.176 06:08:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:22.176 06:08:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:22.434 06:08:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:23.001 06:08:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:23.937 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:23.937 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.937 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.937 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.195 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.195 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.195 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.195 06:08:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.453 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.453 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.453 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.453 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.711 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.711 06:08:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.711 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.711 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.970 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.970 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:24.970 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.970 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.228 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.228 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.228 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.228 06:08:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.486 06:08:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.486 06:08:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:25.745 06:08:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:25.745 06:08:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:26.003 06:08:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:26.262 06:08:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.637 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.895 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.895 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.895 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.895 06:08:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.153 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.153 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.153 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.153 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.410 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.410 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.410 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.410 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.669 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.669 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.669 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.669 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.927 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.927 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:28.927 06:08:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:29.493 06:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:29.493 06:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.870 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.128 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.128 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.128 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.128 06:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.387 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.387 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.387 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.387 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.644 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.644 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.644 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.644 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.902 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.902 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.902 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.902 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.160 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.160 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:32.160 06:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.419 06:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:32.677 06:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:33.613 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:33.613 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.613 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.613 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.872 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.872 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.872 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.872 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.131 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.131 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.131 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.131 06:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.390 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.390 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.390 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.390 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:22:34.648 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.648 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.648 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.648 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.914 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.914 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.914 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.915 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.198 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.198 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:35.198 06:08:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:35.463 06:08:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:35.721 06:08:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:36.658 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:36.658 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.658 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.658 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.916 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.916 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:36.916 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.916 06:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.175 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.175 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:22:37.175 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.175 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.743 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.001 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.001 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:38.001 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.001 06:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82728 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82728 ']' 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82728 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82728 00:22:38.260 killing process with pid 82728 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82728' 00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82728 
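[editor's note] The trace above repeatedly toggles ANA state on the target side with nvmf_subsystem_listener_set_ana_state and then verifies, from the host side, what bdevperf's bdev_nvme layer reports for each path via bdev_nvme_get_io_paths filtered through jq. The snippet below is a minimal sketch of that check pattern, not the multipath_status.sh script itself; it assumes the same RPC script path and bdevperf RPC socket used in this run (RPC_PY and BPF_SOCK are illustrative variable names introduced here).

    # Sketch of the port_status-style check exercised in this test run.
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPF_SOCK=/var/tmp/bdevperf.sock

    # $1 = listener trsvcid (4420/4421), $2 = field (current/connected/accessible), $3 = expected value
    port_status() {
        local got
        got=$("$RPC_PY" -s "$BPF_SOCK" bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # Flip the 4421 listener to inaccessible on the target, give the host a moment to
    # observe the ANA change, then confirm the path is reported as not accessible.
    "$RPC_PY" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1
    port_status 4421 accessible false && echo "4421 reported inaccessible as expected"

The same pattern, with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active applied first, covers the active/active checks seen later in the trace. [end editor's note]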
00:22:38.260 06:08:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82728 00:22:39.197 Connection closed with partial response: 00:22:39.197 00:22:39.197 00:22:39.462 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82728 00:22:39.462 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:39.462 [2024-07-11 06:08:18.203768] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:22:39.462 [2024-07-11 06:08:18.203931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82728 ] 00:22:39.462 [2024-07-11 06:08:18.371955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.462 [2024-07-11 06:08:18.612157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.462 [2024-07-11 06:08:18.815855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:39.462 Running I/O for 90 seconds... 00:22:39.462 [2024-07-11 06:08:35.155195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88688 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.462 [2024-07-11 06:08:35.155855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.462 [2024-07-11 06:08:35.155937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.155965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.462 [2024-07-11 06:08:35.156002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.156047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.462 [2024-07-11 06:08:35.156107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.156140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.462 [2024-07-11 06:08:35.156173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:39.462 [2024-07-11 06:08:35.156202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.462 [2024-07-11 06:08:35.156223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.156915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.156978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:22:39.463 [2024-07-11 06:08:35.157090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.463 [2024-07-11 06:08:35.157830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.157960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.157981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.463 [2024-07-11 06:08:35.158598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:39.463 [2024-07-11 06:08:35.158627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.158648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.464 [2024-07-11 06:08:35.158710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.158794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.158848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.158899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.158963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.158994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.159016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.159072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.159123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.159173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.159962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:22:39.464 [2024-07-11 06:08:35.160349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.464 [2024-07-11 06:08:35.160471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:39.464 [2024-07-11 06:08:35.160878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.464 [2024-07-11 06:08:35.160899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.160939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.160962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.160991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.161058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.161110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.161175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.161223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.161272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.161292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:35.162367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.162966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.162994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.465 [2024-07-11 06:08:35.163176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.163931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.163984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.164008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:35.164047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:35.164069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:51.500175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:51.500252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:51.500305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:51.500368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:51.500442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:51.500497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.465 [2024-07-11 06:08:51.500546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:51.500595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.465 [2024-07-11 06:08:51.500658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:39.465 [2024-07-11 06:08:51.500692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.500712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.500761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.500811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.500859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.500908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.500957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.500986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:22:39.466 [2024-07-11 06:08:51.501035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.501838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.501968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.501988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.466 [2024-07-11 06:08:51.502330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.466 [2024-07-11 06:08:51.502388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:39.466 [2024-07-11 06:08:51.502418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.467 [2024-07-11 06:08:51.502586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.502636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.502922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.502951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.502971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.503913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.503962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.503990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.504011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.504059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.504107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.504156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.504204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.504253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:22:39.467 [2024-07-11 06:08:51.504281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.504301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.504341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.504364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.506308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.506391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.506444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.506493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.506567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.467 [2024-07-11 06:08:51.506616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.467 [2024-07-11 06:08:51.506684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.467 [2024-07-11 06:08:51.506713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.506734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.506783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.506811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.506831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.506860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.506880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.506910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.506931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.506983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.468 [2024-07-11 06:08:51.507811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.507958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.507986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.508006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.508056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.508105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.508154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.508206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.508254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.508303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.508377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.508407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.508428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.509935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.509977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.510041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.510092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.510142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.510192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.510241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.510291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.468 [2024-07-11 06:08:51.510339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.468 [2024-07-11 06:08:51.510388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:39.468 [2024-07-11 06:08:51.510539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.510692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.510749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.510773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.510803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.510823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.510854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.510875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.510924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.510949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.510980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:22:39.469 [2024-07-11 06:08:51.511149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.511914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.511963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.511991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.512011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.512039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.512059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.512088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.512119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.512151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.512172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.513655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.513735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.513790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.513841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.513890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.513941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.513970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.513990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.514039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.514137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:39.469 [2024-07-11 06:08:51.514186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.469 [2024-07-11 06:08:51.514254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.514317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.514409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.469 [2024-07-11 06:08:51.514475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:39.469 [2024-07-11 06:08:51.514518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.514539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.514568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.514588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.514617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.514637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.514665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.514686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.514838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.514873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.514904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.515088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.515134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.515157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.515187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.515208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.515251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.515274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.515304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.515324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.515354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.515374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.516738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.516778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.516819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.516842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.516874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.516895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.516924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.516959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:22:39.470 [2024-07-11 06:08:51.517541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.517854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.517883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.519432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.519500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.519561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.470 [2024-07-11 06:08:51.519609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.470 [2024-07-11 06:08:51.519672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:39.470 [2024-07-11 06:08:51.519717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.471 [2024-07-11 06:08:51.519737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:39.471 [2024-07-11 06:08:51.519783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.471 [2024-07-11 06:08:51.519808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:39.471 [2024-07-11 06:08:51.519838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.471 [2024-07-11 06:08:51.519858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:39.471 [2024-07-11 06:08:51.519907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.471 [2024-07-11 06:08:51.519928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.519956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.519977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.520026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.520086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.520138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.520187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.520235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.520284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.520313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.520352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.521651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.521732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.521787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.521837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:39.472 [2024-07-11 06:08:51.521885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.521934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.521962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.521997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.522121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.522166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.522294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.522370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.522517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.522763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.522789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.523791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.523841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.523908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.523935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.523967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.523988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.524018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.524039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.524068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.524161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.524181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.524209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.472 [2024-07-11 06:08:51.524229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:39.472 [2024-07-11 06:08:51.524257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.472 [2024-07-11 06:08:51.524277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:39.472 Received shutdown signal, test time was about 33.948399 seconds 00:22:39.472 00:22:39.472 Latency(us) 00:22:39.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.472 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:39.472 Verification LBA range: start 0x0 length 0x4000 00:22:39.472 Nvme0n1 : 33.95 6406.17 25.02 0.00 0.00 19938.47 309.06 4026531.84 00:22:39.472 =================================================================================================================== 00:22:39.472 Total : 6406.17 25.02 0.00 0.00 19938.47 309.06 4026531.84 00:22:39.472 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.040 rmmod nvme_tcp 00:22:40.040 rmmod nvme_fabrics 00:22:40.040 rmmod nvme_keyring 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@125 -- # return 0 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82672 ']' 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 82672 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 82672 ']' 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 82672 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82672 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.040 killing process with pid 82672 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82672' 00:22:40.040 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 82672 00:22:40.041 06:08:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 82672 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:41.419 00:22:41.419 real 0m42.506s 00:22:41.419 user 2m15.555s 00:22:41.419 sys 0m11.149s 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.419 06:08:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:41.419 ************************************ 00:22:41.419 END TEST nvmf_host_multipath_status 00:22:41.419 ************************************ 00:22:41.679 06:08:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.679 06:08:57 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.679 06:08:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:41.679 06:08:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.679 06:08:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.679 ************************************ 00:22:41.679 START TEST nvmf_discovery_remove_ifc 00:22:41.679 
************************************ 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.679 * Looking for test storage... 00:22:41.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.679 06:08:57 
nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.679 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:41.680 Cannot find device "nvmf_tgt_br" 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:22:41.680 Cannot find device "nvmf_tgt_br2" 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:41.680 Cannot find device "nvmf_tgt_br" 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:41.680 Cannot find device "nvmf_tgt_br2" 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:41.680 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:41.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:41.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:41.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:22:41.940 00:22:41.940 --- 10.0.0.2 ping statistics --- 00:22:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.940 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:41.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:41.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:41.940 00:22:41.940 --- 10.0.0.3 ping statistics --- 00:22:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.940 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:41.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:22:41.940 00:22:41.940 --- 10.0.0.1 ping statistics --- 00:22:41.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.940 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=83530 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 83530 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83530 ']' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.940 06:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.199 [2024-07-11 06:08:57.947719] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
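The nvmf_veth_init sequence traced above boils down to a small namespace-plus-bridge topology: a target namespace nvmf_tgt_ns_spdk holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side veth (10.0.0.1) left in the root namespace, and a bridge nvmf_br tying the peer ends together, with iptables rules admitting TCP port 4420. The commands below are a condensed sketch of that trace, not the literal contents of nvmf/common.sh; the loop over the bridge-side interfaces is a shorthand added here.

  # Sketch of the veth/netns topology built by nvmf_veth_init (names and addresses taken from the trace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" master nvmf_br
      ip link set "$br" up
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # The pings in the log are just a sanity check that the bridge forwards both ways
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1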
00:22:42.199 [2024-07-11 06:08:57.947930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.458 [2024-07-11 06:08:58.127337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.458 [2024-07-11 06:08:58.351123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.458 [2024-07-11 06:08:58.351190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.458 [2024-07-11 06:08:58.351231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.458 [2024-07-11 06:08:58.351287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.458 [2024-07-11 06:08:58.351305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.458 [2024-07-11 06:08:58.351404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.717 [2024-07-11 06:08:58.567047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.286 06:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.286 [2024-07-11 06:08:58.956627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.286 [2024-07-11 06:08:58.964784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:43.286 null0 00:22:43.286 [2024-07-11 06:08:58.996929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83561 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83561 /tmp/host.sock 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 83561 ']' 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:43.286 06:08:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.286 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.286 06:08:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.286 [2024-07-11 06:08:59.139299] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:22:43.286 [2024-07-11 06:08:59.139457] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83561 ] 00:22:43.545 [2024-07-11 06:08:59.315467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.804 [2024-07-11 06:08:59.540592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.371 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.630 [2024-07-11 06:09:00.302106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:44.630 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.630 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:44.630 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.630 06:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.565 [2024-07-11 06:09:01.431700] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.565 [2024-07-11 06:09:01.431914] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.565 [2024-07-11 06:09:01.432000] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.565 [2024-07-11 06:09:01.437792] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:45.824 [2024-07-11 06:09:01.504473] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.824 [2024-07-11 06:09:01.504713] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.824 [2024-07-11 06:09:01.504837] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.824 [2024-07-11 06:09:01.504940] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.824 [2024-07-11 06:09:01.505115] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.824 [2024-07-11 06:09:01.510257] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.824 06:09:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.762 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.020 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.020 06:09:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.956 06:09:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.889 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.889 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.889 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.889 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.889 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.890 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.890 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.890 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.148 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.148 06:09:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.113 06:09:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.047 [2024-07-11 06:09:06.931886] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:51.047 [2024-07-11 06:09:06.932030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.047 [2024-07-11 06:09:06.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.047 [2024-07-11 06:09:06.932074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.047 [2024-07-11 06:09:06.932089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.047 [2024-07-11 06:09:06.932103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.047 [2024-07-11 06:09:06.932116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.047 [2024-07-11 06:09:06.932131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.047 [2024-07-11 06:09:06.932144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.047 [2024-07-11 06:09:06.932159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.047 [2024-07-11 06:09:06.932173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.047 [2024-07-11 06:09:06.932186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:22:51.047 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.047 [2024-07-11 06:09:06.941876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:51.047 [2024-07-11 06:09:06.951906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:51.305 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.305 06:09:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.262 06:09:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.262 [2024-07-11 06:09:08.015769] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:22:52.262 [2024-07-11 06:09:08.015928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:22:52.262 [2024-07-11 06:09:08.015974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:22:52.262 [2024-07-11 06:09:08.016058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:52.262 [2024-07-11 06:09:08.017284] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:52.262 [2024-07-11 06:09:08.017400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:52.262 [2024-07-11 06:09:08.017433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:52.262 [2024-07-11 06:09:08.017462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:52.262 [2024-07-11 06:09:08.017563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
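The connect() errno 110 and "Resetting controller failed" records above are the expected outcome at this point in the test: 10.0.0.2 was deleted from nvmf_tgt_if and the interface was downed earlier in the log, so every reconnect attempt to 10.0.0.2:4420 times out, and the short loss/reconnect timeouts given to the discovery service let the controller be torn down quickly instead of retrying for minutes. For reference, the host-side RPCs that set this up, condensed from the trace into direct rpc.py form (the test actually issues them through its rpc_cmd wrapper, so treat this as a sketch rather than a literal quote of the script):

  # Discovery attach with aggressive timeouts, as traced earlier in the log
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach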
00:22:52.262 [2024-07-11 06:09:08.017598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.262 06:09:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.262 06:09:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.262 06:09:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.196 [2024-07-11 06:09:09.017764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:53.196 [2024-07-11 06:09:09.017859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:53.196 [2024-07-11 06:09:09.017879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:53.196 [2024-07-11 06:09:09.017894] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:53.196 [2024-07-11 06:09:09.017931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.196 [2024-07-11 06:09:09.018005] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:53.196 [2024-07-11 06:09:09.018119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.196 [2024-07-11 06:09:09.018174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.196 [2024-07-11 06:09:09.018209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.196 [2024-07-11 06:09:09.018224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.196 [2024-07-11 06:09:09.018238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.196 [2024-07-11 06:09:09.018252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.196 [2024-07-11 06:09:09.018267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.196 [2024-07-11 06:09:09.018280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.196 [2024-07-11 06:09:09.018295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.196 [2024-07-11 06:09:09.018319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.196 [2024-07-11 06:09:09.018332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
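The repeated bdev_get_bdevs | jq | sort | xargs pipelines scattered through this part of the log come from the test's polling helpers: get_bdev_list flattens the current bdev names into one string, and wait_for_bdev loops once per second until that string matches the expected value ('' while the target path is down, nvme1n1 after it comes back). A minimal sketch of the idea, assuming the /tmp/host.sock RPC socket from the trace; the real helpers live in discovery_remove_ifc.sh and may differ in detail (for example, in how they bound the wait):

  # Poll the host app's bdev list until it matches the expected name (sketch)
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''         # nvme0n1 must disappear once 10.0.0.2 is unreachable
  wait_for_bdev nvme1n1    # a fresh controller must attach once the path is restored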
00:22:53.196 [2024-07-11 06:09:09.018416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:22:53.196 [2024-07-11 06:09:09.019396] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:53.196 [2024-07-11 06:09:09.019491] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:53.196 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:53.454 06:09:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.388 06:09:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:54.388 06:09:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.324 [2024-07-11 06:09:11.029203] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:55.324 [2024-07-11 06:09:11.029295] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:55.324 [2024-07-11 06:09:11.029338] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:55.324 [2024-07-11 06:09:11.035341] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:55.324 [2024-07-11 06:09:11.101167] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:55.324 [2024-07-11 06:09:11.101267] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:55.324 [2024-07-11 06:09:11.101337] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:55.324 [2024-07-11 06:09:11.101365] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:55.324 [2024-07-11 06:09:11.101382] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.324 [2024-07-11 06:09:11.108290] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
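The "attach nvme1 done" and "found again" records above are triggered purely by restoring the data path: once 10.0.0.2 is reachable again, the discovery service reconnects to 10.0.0.2:8009, re-reads the discovery log page, and attaches the subsystem as a new controller, nvme1, whose namespace shows up as bdev nvme1n1. Condensed from the trace (same interface names as above; wait_for_bdev refers to the sketch in the previous note, not the literal script):

  # Bring the target path back and wait for discovery to re-attach (commands as traced)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1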
00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83561 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83561 ']' 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83561 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83561 00:22:55.583 killing process with pid 83561 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83561' 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83561 00:22:55.583 06:09:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83561 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.960 rmmod nvme_tcp 00:22:56.960 rmmod nvme_fabrics 00:22:56.960 rmmod nvme_keyring 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:56.960 06:09:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 83530 ']' 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 83530 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 83530 ']' 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 83530 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.960 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83530 00:22:56.961 killing process with pid 83530 00:22:56.961 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.961 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.961 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83530' 00:22:56.961 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 83530 00:22:56.961 06:09:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 83530 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:58.337 00:22:58.337 real 0m16.698s 00:22:58.337 user 0m28.462s 00:22:58.337 sys 0m2.502s 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.337 06:09:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.337 ************************************ 00:22:58.337 END TEST nvmf_discovery_remove_ifc 00:22:58.337 ************************************ 00:22:58.337 06:09:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:58.337 06:09:14 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:58.338 06:09:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:58.338 06:09:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.338 06:09:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.338 ************************************ 00:22:58.338 START TEST nvmf_identify_kernel_target 00:22:58.338 ************************************ 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:58.338 * Looking for test storage... 00:22:58.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:58.338 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:58.597 Cannot find device "nvmf_tgt_br" 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.597 Cannot find device "nvmf_tgt_br2" 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:58.597 Cannot find device "nvmf_tgt_br" 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:58.597 Cannot find device "nvmf_tgt_br2" 00:22:58.597 06:09:14 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:58.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.597 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:58.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:22:58.856 00:22:58.856 --- 10.0.0.2 ping statistics --- 00:22:58.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.856 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:58.856 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.856 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:58.856 00:22:58.856 --- 10.0.0.3 ping statistics --- 00:22:58.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.856 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:58.856 00:22:58.856 --- 10.0.0.1 ping statistics --- 00:22:58.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.856 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
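[editor's note] The nvmf_veth_init sequence traced above builds a small test topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth end, an initiator-side veth pair left in the root namespace, and a bridge (nvmf_br) tying the peer ends together, with iptables rules admitting TCP/4420 and bridge-local forwarding. A condensed sketch of that bring-up, using only interface names, addresses, and commands that appear in the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity; error handling omitted):

  # Condensed sketch of the veth/namespace topology built by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                               # bridge the peer ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # sanity check, as in the trace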
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:58.856 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.857 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:58.857 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:58.857 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:58.857 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.857 06:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:59.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:59.115 Waiting for block devices as requested 00:22:59.115 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:59.374 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:59.374 No valid GPT data, bailing 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:59.374 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:59.634 No valid GPT data, bailing 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:59.634 No valid GPT data, bailing 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:59.634 No valid GPT data, bailing 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
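[editor's note] The block-device scan above picks a namespace to back the kernel target: each /sys/block/nvme* entry is skipped if it is zoned or already carries a partition table, and the last survivor (here /dev/nvme1n1) is used. A rough standalone equivalent of that loop is sketched below; the GPT probe is simplified to blkid, since spdk-gpt.py is an SPDK-internal helper with extra SPDK-partition checks:

  # Rough equivalent of the device-selection loop above; uses blkid instead of spdk-gpt.py.
  nvme=
  for block in /sys/block/nvme*; do
      [ -e "$block" ] || continue
      dev=/dev/${block##*/}
      # skip zoned namespaces
      if [ -e "$block/queue/zoned" ] && [ "$(cat "$block/queue/zoned")" != none ]; then
          continue
      fi
      # skip devices that already carry a partition table
      pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
      [ -n "$pt" ] && continue
      nvme=$dev
  done
  echo "backing device: ${nvme:-none found}"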
00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -a 10.0.0.1 -t tcp -s 4420 00:22:59.634 00:22:59.634 Discovery Log Number of Records 2, Generation counter 2 00:22:59.634 =====Discovery Log Entry 0====== 00:22:59.634 trtype: tcp 00:22:59.634 adrfam: ipv4 00:22:59.634 subtype: current discovery subsystem 00:22:59.634 treq: not specified, sq flow control disable supported 00:22:59.634 portid: 1 00:22:59.634 trsvcid: 4420 00:22:59.634 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:59.634 traddr: 10.0.0.1 00:22:59.634 eflags: none 00:22:59.634 sectype: none 00:22:59.634 =====Discovery Log Entry 1====== 00:22:59.634 trtype: tcp 00:22:59.634 adrfam: ipv4 00:22:59.634 subtype: nvme subsystem 00:22:59.634 treq: not specified, sq flow control disable supported 00:22:59.634 portid: 1 00:22:59.634 trsvcid: 4420 00:22:59.634 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:59.634 traddr: 10.0.0.1 00:22:59.634 eflags: none 00:22:59.634 sectype: none 00:22:59.634 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:59.634 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:59.894 ===================================================== 00:22:59.894 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:59.894 ===================================================== 00:22:59.894 Controller Capabilities/Features 00:22:59.894 ================================ 00:22:59.894 Vendor ID: 0000 00:22:59.894 Subsystem Vendor ID: 0000 00:22:59.894 Serial Number: 7cfb284d2932b0e69ca5 00:22:59.894 Model Number: Linux 00:22:59.894 Firmware Version: 6.7.0-68 00:22:59.894 Recommended Arb Burst: 0 00:22:59.894 IEEE OUI Identifier: 00 00 00 00:22:59.894 Multi-path I/O 00:22:59.894 May have multiple subsystem ports: No 00:22:59.894 May have multiple controllers: No 00:22:59.894 Associated with SR-IOV VF: No 00:22:59.894 Max Data Transfer Size: Unlimited 00:22:59.894 Max Number of Namespaces: 0 
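[editor's note] Everything configure_kernel_target does above is plain nvmet configfs plumbing: create a subsystem with one namespace backed by the selected block device, create a TCP port on 10.0.0.1:4420, and symlink the subsystem into the port. The xtrace shows only the echoed values, not their redirect targets, so the standard nvmet attribute names are assumed in the sketch below (attr_model is consistent with the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" seen in the identify output):

  # configfs plumbing behind configure_kernel_target; redirect targets are assumed.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet                                  # nvmet-tcp is pulled in once the port trtype is set
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$subsys/attr_model"        # reported back as the controller Model Number
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420        # should list discovery subsystem plus testnqn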
00:22:59.894 Max Number of I/O Queues: 1024 00:22:59.894 NVMe Specification Version (VS): 1.3 00:22:59.894 NVMe Specification Version (Identify): 1.3 00:22:59.894 Maximum Queue Entries: 1024 00:22:59.894 Contiguous Queues Required: No 00:22:59.894 Arbitration Mechanisms Supported 00:22:59.894 Weighted Round Robin: Not Supported 00:22:59.894 Vendor Specific: Not Supported 00:22:59.894 Reset Timeout: 7500 ms 00:22:59.894 Doorbell Stride: 4 bytes 00:22:59.894 NVM Subsystem Reset: Not Supported 00:22:59.894 Command Sets Supported 00:22:59.894 NVM Command Set: Supported 00:22:59.894 Boot Partition: Not Supported 00:22:59.894 Memory Page Size Minimum: 4096 bytes 00:22:59.894 Memory Page Size Maximum: 4096 bytes 00:22:59.894 Persistent Memory Region: Not Supported 00:22:59.894 Optional Asynchronous Events Supported 00:22:59.894 Namespace Attribute Notices: Not Supported 00:22:59.894 Firmware Activation Notices: Not Supported 00:22:59.894 ANA Change Notices: Not Supported 00:22:59.894 PLE Aggregate Log Change Notices: Not Supported 00:22:59.894 LBA Status Info Alert Notices: Not Supported 00:22:59.894 EGE Aggregate Log Change Notices: Not Supported 00:22:59.894 Normal NVM Subsystem Shutdown event: Not Supported 00:22:59.894 Zone Descriptor Change Notices: Not Supported 00:22:59.894 Discovery Log Change Notices: Supported 00:22:59.894 Controller Attributes 00:22:59.894 128-bit Host Identifier: Not Supported 00:22:59.894 Non-Operational Permissive Mode: Not Supported 00:22:59.894 NVM Sets: Not Supported 00:22:59.894 Read Recovery Levels: Not Supported 00:22:59.894 Endurance Groups: Not Supported 00:22:59.894 Predictable Latency Mode: Not Supported 00:22:59.894 Traffic Based Keep ALive: Not Supported 00:22:59.894 Namespace Granularity: Not Supported 00:22:59.894 SQ Associations: Not Supported 00:22:59.894 UUID List: Not Supported 00:22:59.894 Multi-Domain Subsystem: Not Supported 00:22:59.894 Fixed Capacity Management: Not Supported 00:22:59.894 Variable Capacity Management: Not Supported 00:22:59.894 Delete Endurance Group: Not Supported 00:22:59.894 Delete NVM Set: Not Supported 00:22:59.894 Extended LBA Formats Supported: Not Supported 00:22:59.894 Flexible Data Placement Supported: Not Supported 00:22:59.894 00:22:59.894 Controller Memory Buffer Support 00:22:59.894 ================================ 00:22:59.894 Supported: No 00:22:59.894 00:22:59.894 Persistent Memory Region Support 00:22:59.894 ================================ 00:22:59.894 Supported: No 00:22:59.894 00:22:59.894 Admin Command Set Attributes 00:22:59.894 ============================ 00:22:59.894 Security Send/Receive: Not Supported 00:22:59.894 Format NVM: Not Supported 00:22:59.894 Firmware Activate/Download: Not Supported 00:22:59.894 Namespace Management: Not Supported 00:22:59.894 Device Self-Test: Not Supported 00:22:59.894 Directives: Not Supported 00:22:59.894 NVMe-MI: Not Supported 00:22:59.894 Virtualization Management: Not Supported 00:22:59.894 Doorbell Buffer Config: Not Supported 00:22:59.894 Get LBA Status Capability: Not Supported 00:22:59.894 Command & Feature Lockdown Capability: Not Supported 00:22:59.894 Abort Command Limit: 1 00:22:59.894 Async Event Request Limit: 1 00:22:59.894 Number of Firmware Slots: N/A 00:22:59.894 Firmware Slot 1 Read-Only: N/A 00:23:00.154 Firmware Activation Without Reset: N/A 00:23:00.154 Multiple Update Detection Support: N/A 00:23:00.154 Firmware Update Granularity: No Information Provided 00:23:00.154 Per-Namespace SMART Log: No 00:23:00.154 Asymmetric Namespace Access Log Page: 
Not Supported 00:23:00.154 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:00.154 Command Effects Log Page: Not Supported 00:23:00.154 Get Log Page Extended Data: Supported 00:23:00.154 Telemetry Log Pages: Not Supported 00:23:00.154 Persistent Event Log Pages: Not Supported 00:23:00.154 Supported Log Pages Log Page: May Support 00:23:00.154 Commands Supported & Effects Log Page: Not Supported 00:23:00.154 Feature Identifiers & Effects Log Page:May Support 00:23:00.154 NVMe-MI Commands & Effects Log Page: May Support 00:23:00.154 Data Area 4 for Telemetry Log: Not Supported 00:23:00.154 Error Log Page Entries Supported: 1 00:23:00.154 Keep Alive: Not Supported 00:23:00.154 00:23:00.154 NVM Command Set Attributes 00:23:00.154 ========================== 00:23:00.154 Submission Queue Entry Size 00:23:00.154 Max: 1 00:23:00.154 Min: 1 00:23:00.154 Completion Queue Entry Size 00:23:00.154 Max: 1 00:23:00.154 Min: 1 00:23:00.154 Number of Namespaces: 0 00:23:00.154 Compare Command: Not Supported 00:23:00.154 Write Uncorrectable Command: Not Supported 00:23:00.154 Dataset Management Command: Not Supported 00:23:00.154 Write Zeroes Command: Not Supported 00:23:00.154 Set Features Save Field: Not Supported 00:23:00.154 Reservations: Not Supported 00:23:00.154 Timestamp: Not Supported 00:23:00.154 Copy: Not Supported 00:23:00.154 Volatile Write Cache: Not Present 00:23:00.154 Atomic Write Unit (Normal): 1 00:23:00.154 Atomic Write Unit (PFail): 1 00:23:00.154 Atomic Compare & Write Unit: 1 00:23:00.154 Fused Compare & Write: Not Supported 00:23:00.154 Scatter-Gather List 00:23:00.154 SGL Command Set: Supported 00:23:00.154 SGL Keyed: Not Supported 00:23:00.154 SGL Bit Bucket Descriptor: Not Supported 00:23:00.154 SGL Metadata Pointer: Not Supported 00:23:00.154 Oversized SGL: Not Supported 00:23:00.154 SGL Metadata Address: Not Supported 00:23:00.154 SGL Offset: Supported 00:23:00.154 Transport SGL Data Block: Not Supported 00:23:00.154 Replay Protected Memory Block: Not Supported 00:23:00.154 00:23:00.155 Firmware Slot Information 00:23:00.155 ========================= 00:23:00.155 Active slot: 0 00:23:00.155 00:23:00.155 00:23:00.155 Error Log 00:23:00.155 ========= 00:23:00.155 00:23:00.155 Active Namespaces 00:23:00.155 ================= 00:23:00.155 Discovery Log Page 00:23:00.155 ================== 00:23:00.155 Generation Counter: 2 00:23:00.155 Number of Records: 2 00:23:00.155 Record Format: 0 00:23:00.155 00:23:00.155 Discovery Log Entry 0 00:23:00.155 ---------------------- 00:23:00.155 Transport Type: 3 (TCP) 00:23:00.155 Address Family: 1 (IPv4) 00:23:00.155 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:00.155 Entry Flags: 00:23:00.155 Duplicate Returned Information: 0 00:23:00.155 Explicit Persistent Connection Support for Discovery: 0 00:23:00.155 Transport Requirements: 00:23:00.155 Secure Channel: Not Specified 00:23:00.155 Port ID: 1 (0x0001) 00:23:00.155 Controller ID: 65535 (0xffff) 00:23:00.155 Admin Max SQ Size: 32 00:23:00.155 Transport Service Identifier: 4420 00:23:00.155 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:00.155 Transport Address: 10.0.0.1 00:23:00.155 Discovery Log Entry 1 00:23:00.155 ---------------------- 00:23:00.155 Transport Type: 3 (TCP) 00:23:00.155 Address Family: 1 (IPv4) 00:23:00.155 Subsystem Type: 2 (NVM Subsystem) 00:23:00.155 Entry Flags: 00:23:00.155 Duplicate Returned Information: 0 00:23:00.155 Explicit Persistent Connection Support for Discovery: 0 00:23:00.155 Transport Requirements: 00:23:00.155 
Secure Channel: Not Specified 00:23:00.155 Port ID: 1 (0x0001) 00:23:00.155 Controller ID: 65535 (0xffff) 00:23:00.155 Admin Max SQ Size: 32 00:23:00.155 Transport Service Identifier: 4420 00:23:00.155 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:00.155 Transport Address: 10.0.0.1 00:23:00.155 06:09:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:00.155 get_feature(0x01) failed 00:23:00.155 get_feature(0x02) failed 00:23:00.155 get_feature(0x04) failed 00:23:00.155 ===================================================== 00:23:00.155 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:00.155 ===================================================== 00:23:00.155 Controller Capabilities/Features 00:23:00.155 ================================ 00:23:00.155 Vendor ID: 0000 00:23:00.155 Subsystem Vendor ID: 0000 00:23:00.155 Serial Number: dcf557f4129436e95ce0 00:23:00.155 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:00.155 Firmware Version: 6.7.0-68 00:23:00.155 Recommended Arb Burst: 6 00:23:00.155 IEEE OUI Identifier: 00 00 00 00:23:00.155 Multi-path I/O 00:23:00.155 May have multiple subsystem ports: Yes 00:23:00.155 May have multiple controllers: Yes 00:23:00.155 Associated with SR-IOV VF: No 00:23:00.155 Max Data Transfer Size: Unlimited 00:23:00.155 Max Number of Namespaces: 1024 00:23:00.155 Max Number of I/O Queues: 128 00:23:00.155 NVMe Specification Version (VS): 1.3 00:23:00.155 NVMe Specification Version (Identify): 1.3 00:23:00.155 Maximum Queue Entries: 1024 00:23:00.155 Contiguous Queues Required: No 00:23:00.155 Arbitration Mechanisms Supported 00:23:00.155 Weighted Round Robin: Not Supported 00:23:00.155 Vendor Specific: Not Supported 00:23:00.155 Reset Timeout: 7500 ms 00:23:00.155 Doorbell Stride: 4 bytes 00:23:00.155 NVM Subsystem Reset: Not Supported 00:23:00.155 Command Sets Supported 00:23:00.155 NVM Command Set: Supported 00:23:00.155 Boot Partition: Not Supported 00:23:00.155 Memory Page Size Minimum: 4096 bytes 00:23:00.155 Memory Page Size Maximum: 4096 bytes 00:23:00.155 Persistent Memory Region: Not Supported 00:23:00.155 Optional Asynchronous Events Supported 00:23:00.155 Namespace Attribute Notices: Supported 00:23:00.155 Firmware Activation Notices: Not Supported 00:23:00.155 ANA Change Notices: Supported 00:23:00.155 PLE Aggregate Log Change Notices: Not Supported 00:23:00.155 LBA Status Info Alert Notices: Not Supported 00:23:00.155 EGE Aggregate Log Change Notices: Not Supported 00:23:00.155 Normal NVM Subsystem Shutdown event: Not Supported 00:23:00.155 Zone Descriptor Change Notices: Not Supported 00:23:00.155 Discovery Log Change Notices: Not Supported 00:23:00.155 Controller Attributes 00:23:00.155 128-bit Host Identifier: Supported 00:23:00.155 Non-Operational Permissive Mode: Not Supported 00:23:00.155 NVM Sets: Not Supported 00:23:00.155 Read Recovery Levels: Not Supported 00:23:00.155 Endurance Groups: Not Supported 00:23:00.155 Predictable Latency Mode: Not Supported 00:23:00.155 Traffic Based Keep ALive: Supported 00:23:00.155 Namespace Granularity: Not Supported 00:23:00.155 SQ Associations: Not Supported 00:23:00.155 UUID List: Not Supported 00:23:00.155 Multi-Domain Subsystem: Not Supported 00:23:00.155 Fixed Capacity Management: Not Supported 00:23:00.155 Variable Capacity Management: Not Supported 00:23:00.155 
Delete Endurance Group: Not Supported 00:23:00.155 Delete NVM Set: Not Supported 00:23:00.155 Extended LBA Formats Supported: Not Supported 00:23:00.155 Flexible Data Placement Supported: Not Supported 00:23:00.155 00:23:00.155 Controller Memory Buffer Support 00:23:00.155 ================================ 00:23:00.155 Supported: No 00:23:00.155 00:23:00.155 Persistent Memory Region Support 00:23:00.155 ================================ 00:23:00.155 Supported: No 00:23:00.155 00:23:00.155 Admin Command Set Attributes 00:23:00.155 ============================ 00:23:00.155 Security Send/Receive: Not Supported 00:23:00.155 Format NVM: Not Supported 00:23:00.155 Firmware Activate/Download: Not Supported 00:23:00.155 Namespace Management: Not Supported 00:23:00.155 Device Self-Test: Not Supported 00:23:00.155 Directives: Not Supported 00:23:00.155 NVMe-MI: Not Supported 00:23:00.155 Virtualization Management: Not Supported 00:23:00.155 Doorbell Buffer Config: Not Supported 00:23:00.155 Get LBA Status Capability: Not Supported 00:23:00.155 Command & Feature Lockdown Capability: Not Supported 00:23:00.155 Abort Command Limit: 4 00:23:00.155 Async Event Request Limit: 4 00:23:00.155 Number of Firmware Slots: N/A 00:23:00.155 Firmware Slot 1 Read-Only: N/A 00:23:00.155 Firmware Activation Without Reset: N/A 00:23:00.155 Multiple Update Detection Support: N/A 00:23:00.155 Firmware Update Granularity: No Information Provided 00:23:00.155 Per-Namespace SMART Log: Yes 00:23:00.155 Asymmetric Namespace Access Log Page: Supported 00:23:00.155 ANA Transition Time : 10 sec 00:23:00.155 00:23:00.155 Asymmetric Namespace Access Capabilities 00:23:00.155 ANA Optimized State : Supported 00:23:00.155 ANA Non-Optimized State : Supported 00:23:00.155 ANA Inaccessible State : Supported 00:23:00.155 ANA Persistent Loss State : Supported 00:23:00.155 ANA Change State : Supported 00:23:00.155 ANAGRPID is not changed : No 00:23:00.155 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:00.155 00:23:00.155 ANA Group Identifier Maximum : 128 00:23:00.155 Number of ANA Group Identifiers : 128 00:23:00.155 Max Number of Allowed Namespaces : 1024 00:23:00.155 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:00.155 Command Effects Log Page: Supported 00:23:00.155 Get Log Page Extended Data: Supported 00:23:00.155 Telemetry Log Pages: Not Supported 00:23:00.155 Persistent Event Log Pages: Not Supported 00:23:00.155 Supported Log Pages Log Page: May Support 00:23:00.155 Commands Supported & Effects Log Page: Not Supported 00:23:00.155 Feature Identifiers & Effects Log Page:May Support 00:23:00.155 NVMe-MI Commands & Effects Log Page: May Support 00:23:00.155 Data Area 4 for Telemetry Log: Not Supported 00:23:00.155 Error Log Page Entries Supported: 128 00:23:00.155 Keep Alive: Supported 00:23:00.155 Keep Alive Granularity: 1000 ms 00:23:00.155 00:23:00.155 NVM Command Set Attributes 00:23:00.155 ========================== 00:23:00.155 Submission Queue Entry Size 00:23:00.155 Max: 64 00:23:00.155 Min: 64 00:23:00.155 Completion Queue Entry Size 00:23:00.155 Max: 16 00:23:00.155 Min: 16 00:23:00.155 Number of Namespaces: 1024 00:23:00.155 Compare Command: Not Supported 00:23:00.155 Write Uncorrectable Command: Not Supported 00:23:00.155 Dataset Management Command: Supported 00:23:00.155 Write Zeroes Command: Supported 00:23:00.155 Set Features Save Field: Not Supported 00:23:00.155 Reservations: Not Supported 00:23:00.155 Timestamp: Not Supported 00:23:00.155 Copy: Not Supported 00:23:00.155 Volatile Write Cache: Present 
00:23:00.155 Atomic Write Unit (Normal): 1 00:23:00.155 Atomic Write Unit (PFail): 1 00:23:00.155 Atomic Compare & Write Unit: 1 00:23:00.155 Fused Compare & Write: Not Supported 00:23:00.155 Scatter-Gather List 00:23:00.155 SGL Command Set: Supported 00:23:00.155 SGL Keyed: Not Supported 00:23:00.155 SGL Bit Bucket Descriptor: Not Supported 00:23:00.155 SGL Metadata Pointer: Not Supported 00:23:00.155 Oversized SGL: Not Supported 00:23:00.155 SGL Metadata Address: Not Supported 00:23:00.155 SGL Offset: Supported 00:23:00.155 Transport SGL Data Block: Not Supported 00:23:00.155 Replay Protected Memory Block: Not Supported 00:23:00.155 00:23:00.155 Firmware Slot Information 00:23:00.155 ========================= 00:23:00.156 Active slot: 0 00:23:00.156 00:23:00.156 Asymmetric Namespace Access 00:23:00.156 =========================== 00:23:00.156 Change Count : 0 00:23:00.156 Number of ANA Group Descriptors : 1 00:23:00.156 ANA Group Descriptor : 0 00:23:00.156 ANA Group ID : 1 00:23:00.156 Number of NSID Values : 1 00:23:00.156 Change Count : 0 00:23:00.156 ANA State : 1 00:23:00.156 Namespace Identifier : 1 00:23:00.156 00:23:00.156 Commands Supported and Effects 00:23:00.156 ============================== 00:23:00.156 Admin Commands 00:23:00.156 -------------- 00:23:00.156 Get Log Page (02h): Supported 00:23:00.156 Identify (06h): Supported 00:23:00.156 Abort (08h): Supported 00:23:00.156 Set Features (09h): Supported 00:23:00.156 Get Features (0Ah): Supported 00:23:00.156 Asynchronous Event Request (0Ch): Supported 00:23:00.156 Keep Alive (18h): Supported 00:23:00.156 I/O Commands 00:23:00.156 ------------ 00:23:00.156 Flush (00h): Supported 00:23:00.156 Write (01h): Supported LBA-Change 00:23:00.156 Read (02h): Supported 00:23:00.156 Write Zeroes (08h): Supported LBA-Change 00:23:00.156 Dataset Management (09h): Supported 00:23:00.156 00:23:00.156 Error Log 00:23:00.156 ========= 00:23:00.156 Entry: 0 00:23:00.156 Error Count: 0x3 00:23:00.156 Submission Queue Id: 0x0 00:23:00.156 Command Id: 0x5 00:23:00.156 Phase Bit: 0 00:23:00.156 Status Code: 0x2 00:23:00.156 Status Code Type: 0x0 00:23:00.156 Do Not Retry: 1 00:23:00.415 Error Location: 0x28 00:23:00.415 LBA: 0x0 00:23:00.415 Namespace: 0x0 00:23:00.415 Vendor Log Page: 0x0 00:23:00.415 ----------- 00:23:00.415 Entry: 1 00:23:00.415 Error Count: 0x2 00:23:00.415 Submission Queue Id: 0x0 00:23:00.415 Command Id: 0x5 00:23:00.415 Phase Bit: 0 00:23:00.415 Status Code: 0x2 00:23:00.415 Status Code Type: 0x0 00:23:00.415 Do Not Retry: 1 00:23:00.415 Error Location: 0x28 00:23:00.415 LBA: 0x0 00:23:00.415 Namespace: 0x0 00:23:00.415 Vendor Log Page: 0x0 00:23:00.415 ----------- 00:23:00.415 Entry: 2 00:23:00.415 Error Count: 0x1 00:23:00.415 Submission Queue Id: 0x0 00:23:00.415 Command Id: 0x4 00:23:00.415 Phase Bit: 0 00:23:00.415 Status Code: 0x2 00:23:00.415 Status Code Type: 0x0 00:23:00.415 Do Not Retry: 1 00:23:00.415 Error Location: 0x28 00:23:00.415 LBA: 0x0 00:23:00.415 Namespace: 0x0 00:23:00.415 Vendor Log Page: 0x0 00:23:00.415 00:23:00.415 Number of Queues 00:23:00.415 ================ 00:23:00.415 Number of I/O Submission Queues: 128 00:23:00.415 Number of I/O Completion Queues: 128 00:23:00.415 00:23:00.415 ZNS Specific Controller Data 00:23:00.415 ============================ 00:23:00.415 Zone Append Size Limit: 0 00:23:00.415 00:23:00.415 00:23:00.415 Active Namespaces 00:23:00.415 ================= 00:23:00.415 get_feature(0x05) failed 00:23:00.415 Namespace ID:1 00:23:00.415 Command Set Identifier: NVM (00h) 
00:23:00.415 Deallocate: Supported 00:23:00.415 Deallocated/Unwritten Error: Not Supported 00:23:00.415 Deallocated Read Value: Unknown 00:23:00.415 Deallocate in Write Zeroes: Not Supported 00:23:00.415 Deallocated Guard Field: 0xFFFF 00:23:00.415 Flush: Supported 00:23:00.415 Reservation: Not Supported 00:23:00.415 Namespace Sharing Capabilities: Multiple Controllers 00:23:00.415 Size (in LBAs): 1310720 (5GiB) 00:23:00.415 Capacity (in LBAs): 1310720 (5GiB) 00:23:00.415 Utilization (in LBAs): 1310720 (5GiB) 00:23:00.415 UUID: 866ed33a-106e-4931-aa04-c5d3c595cf05 00:23:00.415 Thin Provisioning: Not Supported 00:23:00.415 Per-NS Atomic Units: Yes 00:23:00.415 Atomic Boundary Size (Normal): 0 00:23:00.415 Atomic Boundary Size (PFail): 0 00:23:00.415 Atomic Boundary Offset: 0 00:23:00.415 NGUID/EUI64 Never Reused: No 00:23:00.415 ANA group ID: 1 00:23:00.415 Namespace Write Protected: No 00:23:00.415 Number of LBA Formats: 1 00:23:00.415 Current LBA Format: LBA Format #00 00:23:00.415 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:00.415 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.415 rmmod nvme_tcp 00:23:00.415 rmmod nvme_fabrics 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:00.415 
06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:00.415 06:09:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:01.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:01.352 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:01.352 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:01.352 ************************************ 00:23:01.352 END TEST nvmf_identify_kernel_target 00:23:01.352 ************************************ 00:23:01.352 00:23:01.352 real 0m3.074s 00:23:01.352 user 0m1.084s 00:23:01.352 sys 0m1.450s 00:23:01.352 06:09:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:01.352 06:09:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.352 06:09:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:01.352 06:09:17 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:01.352 06:09:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:01.352 06:09:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.352 06:09:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.352 ************************************ 00:23:01.352 START TEST nvmf_auth_host 00:23:01.352 ************************************ 00:23:01.352 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:01.611 * Looking for test storage... 
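[editor's note] clean_kernel_target, traced just above, is the mirror image of the setup: disable the namespace, unlink the subsystem from the port, remove the configfs directories innermost-first, and unload the nvmet modules. A standalone sketch, using the same assumed attribute paths as the setup sketch earlier:

  # Teardown mirror of the configfs setup; removal must go innermost-first.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$subsys/namespaces/1/enable"          # detach the backing device
  rm -f "$port/subsystems/$nqn"                   # drop the port -> subsystem link
  rmdir "$subsys/namespaces/1"
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                     # matches the final modprobe -r in the trace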
00:23:01.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.611 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:01.612 Cannot find device "nvmf_tgt_br" 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:01.612 Cannot find device "nvmf_tgt_br2" 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:01.612 Cannot find device "nvmf_tgt_br" 
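The "Cannot find device" output here is just nvmf_veth_init flushing any leftover interfaces from a previous run before it rebuilds its topology. The commands that follow create that topology: an initiator veth pair kept on the host, two target veth pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge tying the host-side ends together, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace. Condensed into a sketch with the same names and addresses (error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the three host-side ends
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT           # let bridged traffic through

The ping checks that follow in the log simply verify that 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace before the target application is started there.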
00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:01.612 Cannot find device "nvmf_tgt_br2" 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:01.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:01.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:01.612 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:01.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:01.872 00:23:01.872 --- 10.0.0.2 ping statistics --- 00:23:01.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.872 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:01.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:01.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:23:01.872 00:23:01.872 --- 10.0.0.3 ping statistics --- 00:23:01.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.872 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:01.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:01.872 00:23:01.872 --- 10.0.0.1 ping statistics --- 00:23:01.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.872 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=84484 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 84484 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84484 ']' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.872 06:09:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.872 06:09:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.250 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d819cd3f22b45c34d5fd89b58f278d07 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zij 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d819cd3f22b45c34d5fd89b58f278d07 0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d819cd3f22b45c34d5fd89b58f278d07 0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d819cd3f22b45c34d5fd89b58f278d07 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zij 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zij 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zij 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b8e8b585e9b9aa8ecb1da001850b850768bc3ece0ac97cbd7110a8047aa00e86 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LZ4 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b8e8b585e9b9aa8ecb1da001850b850768bc3ece0ac97cbd7110a8047aa00e86 3 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b8e8b585e9b9aa8ecb1da001850b850768bc3ece0ac97cbd7110a8047aa00e86 3 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b8e8b585e9b9aa8ecb1da001850b850768bc3ece0ac97cbd7110a8047aa00e86 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LZ4 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LZ4 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LZ4 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ebfc329fe16c146e3149f9ff9bb51c7dde7c382b97b0c58 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zqb 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ebfc329fe16c146e3149f9ff9bb51c7dde7c382b97b0c58 0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ebfc329fe16c146e3149f9ff9bb51c7dde7c382b97b0c58 0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ebfc329fe16c146e3149f9ff9bb51c7dde7c382b97b0c58 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:03.251 06:09:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zqb 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zqb 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Zqb 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b1be1d17814ef2cffe460856fe16f8541388626ff561a9e4 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zUA 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b1be1d17814ef2cffe460856fe16f8541388626ff561a9e4 2 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b1be1d17814ef2cffe460856fe16f8541388626ff561a9e4 2 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b1be1d17814ef2cffe460856fe16f8541388626ff561a9e4 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zUA 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zUA 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zUA 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=19abe10041a440b0cd8254336d354128 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1r3 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 19abe10041a440b0cd8254336d354128 
1 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 19abe10041a440b0cd8254336d354128 1 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=19abe10041a440b0cd8254336d354128 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1r3 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1r3 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1r3 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.251 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=858e525539cd57e32c163942097af3ec 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lbf 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 858e525539cd57e32c163942097af3ec 1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 858e525539cd57e32c163942097af3ec 1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=858e525539cd57e32c163942097af3ec 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lbf 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lbf 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lbf 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:03.511 06:09:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a6666100ead32b0bff0ba49ae7b1bc014d26bc9a0ab4217 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JQ4 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a6666100ead32b0bff0ba49ae7b1bc014d26bc9a0ab4217 2 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a6666100ead32b0bff0ba49ae7b1bc014d26bc9a0ab4217 2 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a6666100ead32b0bff0ba49ae7b1bc014d26bc9a0ab4217 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JQ4 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JQ4 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JQ4 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44985d7e68e3924f21cd6a999a1d9ef9 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KzW 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44985d7e68e3924f21cd6a999a1d9ef9 0 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44985d7e68e3924f21cd6a999a1d9ef9 0 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44985d7e68e3924f21cd6a999a1d9ef9 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KzW 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KzW 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KzW 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e0e8e508793c74d52826b9a2e0bede8d6087054fdb0339e697f913839181fedb 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gZI 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e0e8e508793c74d52826b9a2e0bede8d6087054fdb0339e697f913839181fedb 3 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e0e8e508793c74d52826b9a2e0bede8d6087054fdb0339e697f913839181fedb 3 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e0e8e508793c74d52826b9a2e0bede8d6087054fdb0339e697f913839181fedb 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gZI 00:23:03.511 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gZI 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gZI 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84484 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 84484 ']' 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
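Each gen_dhchap_key call above pulls random hex from /dev/urandom with xxd and hands it to an inline python snippet (not expanded in the log) to produce the DHHC-1:xx:...: strings that appear later as keys and ckeys. A sketch of that formatting step, under the assumption that the payload is the ASCII hex string itself with a little-endian CRC-32 appended before base64 encoding (the DH-HMAC-CHAP secret representation used by nvme-cli and the kernel), and that xx is the digest id (00 unhashed, 01/02/03 for SHA-256/384/512):

digest=0                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex characters, as for the null/sha256 keys above
b64=$(python3 - "$key" <<'PY'
import base64, sys, zlib
payload = sys.argv[1].encode()                    # the ASCII hex string is the secret payload
crc = zlib.crc32(payload).to_bytes(4, "little")   # assumed: standard CRC-32, little-endian
print(base64.b64encode(payload + crc).decode())
PY
)
printf 'DHHC-1:%02d:%s:\n' "$digest" "$b64"       # e.g. DHHC-1:00:MGVi...zw==: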
00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.770 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zij 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LZ4 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ4 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Zqb 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zUA ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zUA 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1r3 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lbf ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lbf 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JQ4 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KzW ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KzW 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gZI 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
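With all five keys and their controller keys registered in the SPDK keyring through keyring_file_add_key, nvmet_auth_init now needs a kernel-side subsystem to authenticate against. The configure_kernel_target steps that follow (modprobe nvmet, screening the local block devices with spdk-gpt.py, then a series of mkdir/echo/ln -s calls under configfs) boil down to roughly the sketch below, assuming the nqn.2024-02.io.spdk:cnode0 subsystem, the /dev/nvme1n1 device and the 10.0.0.1:4420 TCP listener chosen in this run; the attribute file names are inferred, since the log only records the values being echoed:

nqn=nqn.2024-02.io.spdk:cnode0
cfs=/sys/kernel/config/nvmet
modprobe nvmet nvmet_tcp
mkdir -p "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
echo "SPDK-$nqn"    > "$cfs/subsystems/$nqn/attr_model"              # assumed attribute names
echo 1              > "$cfs/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme1n1   > "$cfs/subsystems/$nqn/namespaces/1/device_path"
echo 1              > "$cfs/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1       > "$cfs/ports/1/addr_traddr"
echo tcp            > "$cfs/ports/1/addr_trtype"
echo 4420           > "$cfs/ports/1/addr_trsvcid"
echo ipv4           > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"              # expose the subsystem on the port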
00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:04.030 06:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:04.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:04.288 Waiting for block devices as requested 00:23:04.547 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.547 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:05.148 06:09:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:05.148 No valid GPT data, bailing 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:05.148 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:05.149 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:05.149 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:05.149 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:05.149 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:05.149 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:05.415 No valid GPT data, bailing 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:05.415 No valid GPT data, bailing 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:05.415 No valid GPT data, bailing 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:05.415 06:09:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -a 10.0.0.1 -t tcp -s 4420 00:23:05.415 00:23:05.415 Discovery Log Number of Records 2, Generation counter 2 00:23:05.415 =====Discovery Log Entry 0====== 00:23:05.415 trtype: tcp 00:23:05.415 adrfam: ipv4 00:23:05.415 subtype: current discovery subsystem 00:23:05.415 treq: not specified, sq flow control disable supported 00:23:05.415 portid: 1 00:23:05.415 trsvcid: 4420 00:23:05.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:05.415 traddr: 10.0.0.1 00:23:05.415 eflags: none 00:23:05.415 sectype: none 00:23:05.415 =====Discovery Log Entry 1====== 00:23:05.415 trtype: tcp 00:23:05.415 adrfam: ipv4 00:23:05.415 subtype: nvme subsystem 00:23:05.415 treq: not specified, sq flow control disable supported 00:23:05.415 portid: 1 00:23:05.415 trsvcid: 4420 00:23:05.415 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:05.415 traddr: 10.0.0.1 00:23:05.415 eflags: none 00:23:05.415 sectype: none 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.415 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.416 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:05.416 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:05.416 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.416 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.674 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.675 nvme0n1 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.675 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 nvme0n1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.934 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 nvme0n1 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.193 06:09:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 06:09:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.194 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 nvme0n1 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:06.453 06:09:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 nvme0n1 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.453 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.712 nvme0n1 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.712 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.713 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.280 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:07.280 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:07.280 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:07.280 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:07.280 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 nvme0n1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.281 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.541 nvme0n1 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.541 06:09:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.541 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 nvme0n1 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.813 nvme0n1 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.813 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
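For readability, the sketch below condenses one pass of the loop this trace is running: the kernel nvmet target is given a per-host DHCHAP secret, and the SPDK initiator is restricted to a single digest/DH-group pair before attaching with the matching key. Every command, flag and path is taken from the trace itself; the configfs attribute names on the target side are assumptions (xtrace does not record redirection targets), and scripts/rpc.py stands in for the test's rpc_cmd wrapper.

# Target side: export cnode0 on 10.0.0.1:4420/tcp and allow host0 (the port's
# traddr/trtype/trsvcid/adrfam were set just above in the trace via similar echoes)
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
      /sys/kernel/config/nvmet/ports/1/subsystems/
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
      /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0

# nvmet_auth_set_key <digest> <dhgroup> <keyid>, e.g. sha256 ffdhe2048 1: the echoes of
# 'hmac(sha256)', the DH group and the DHHC-1 secrets land in the host's auth attributes
# (attribute names below are assumed; keys truncated here, full values appear in the trace)
cd /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > dhchap_hash                  # assumed attribute name
echo ffdhe2048 > dhchap_dhgroup                    # assumed attribute name
echo 'DHHC-1:00:MGViZmMz...' > dhchap_key          # host key for keyid 1
echo 'DHHC-1:02:YjFiZTFk...' > dhchap_ctrl_key     # controller (bidirectional) key

# Initiator side: connect_authenticate <digest> <dhgroup> <keyid> boils down to
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers           # expect one controller named nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The trace then repeats this for each keyid (0-4, keyid 4 carrying no controller key) and for every dhgroup in ffdhe2048 through ffdhe8192 under each of sha256, sha384 and sha512.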
00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.073 nvme0n1 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.073 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.331 06:09:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:23:08.900 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.901 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.159 nvme0n1 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.159 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.160 06:09:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.160 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 nvme0n1 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.419 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.678 nvme0n1 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.678 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.679 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.937 nvme0n1 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.937 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.196 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.196 06:09:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.197 06:09:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.455 nvme0n1 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.455 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.456 06:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.356 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.357 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.922 nvme0n1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.922 06:09:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 nvme0n1 00:23:13.181 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.181 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.181 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.181 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.181 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.440 
06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.440 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 nvme0n1 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.956 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.956 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.956 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.957 06:09:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.215 nvme0n1 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.215 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.473 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.473 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.474 06:09:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.474 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.732 nvme0n1 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.732 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.733 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.991 06:09:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.559 nvme0n1 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.559 06:09:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.559 06:09:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.560 06:09:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.496 nvme0n1 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.496 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.063 nvme0n1 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.063 
06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
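[Editor's note] For reference, the host-side flow that the trace above repeats for every digest/DH-group/key combination can be condensed as the sketch below. It only restates commands that appear verbatim in this trace; the rpc_cmd wrapper, the keys/ckeys arrays (key0..key4, ckey0..ckey4), and the 10.0.0.1:4420 target listener are set up by earlier parts of auth.sh that are not shown in this excerpt, so treat them as assumptions here.

    # Sketch of the per-key host loop exercised above (sha256 digest, one DH group at a time).
    digest=sha256
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            # Optional controller (bidirectional) key, exactly as auth.sh builds it.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

            # Restrict the host to this digest/DH-group pair, then connect with the key pair.
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # Authentication succeeded if the controller shows up, then tear it down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The nvmet_auth_set_key calls interleaved in the trace configure the matching hmac(sha256) key on the target side before each of these attach attempts.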
00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.063 06:09:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.630 nvme0n1 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:17.630 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.889 
06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.889 06:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.456 nvme0n1 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:18.456 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.457 nvme0n1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
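The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) entry that closes the line above is the part of connect_authenticate that decides whether bidirectional authentication is requested. A short sketch of that step follows; the flag names, key names and the attach command are copied from the rpc_cmd entries in this log, the surrounding shape is illustrative:

# Only build the controller-key arguments when a ckey exists for this key id.
# ${ckeys[keyid]:+...} expands to nothing for key id 4, whose ckey is empty,
# so "${ckey[@]}" then contributes no arguments at all.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

That is why the keyid=4 attaches in this section pass only --dhchap-key key4, while every other key id also passes --dhchap-ctrlr-key ckeyN.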
00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.457 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.716 nvme0n1 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.716 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.717 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.717 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.717 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.975 nvme0n1 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.975 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 nvme0n1 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.976 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.238 06:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 nvme0n1 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.238 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
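At this point the trace has moved from the sha256/ffdhe8192 combinations to sha384 with the smaller DH groups. The driving structure, as echoed at host/auth.sh@100-104, is a plain triple loop; the array contents below are inferred from the values that appear in this log (sha256 and sha384 digests so far, ffdhe2048 through ffdhe8192 groups, key ids 0-4), so treat them as assumptions:

for digest in "${digests[@]}"; do          # sha256 above, sha384 from here on
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ..., ffdhe8192
        for keyid in "${!keys[@]}"; do     # 0..4; key id 4 carries no controller key
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # the echo 'hmac(...)' / echo ffdhe... / echo DHHC-... writes above
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # set_options + attach + verify + detach on the host side
        done
    done
done

Each pass therefore produces one of the bdev_nvme_set_options / bdev_nvme_attach_controller / bdev_nvme_detach_controller groups repeated throughout this section.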
00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.239 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 nvme0n1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
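Between each attach and the next key id, the log shows the same verification pattern: bdev_nvme_get_controllers piped through jq, a comparison against nvme0, then a detach. A condensed sketch of that check, with the commands and jq filter taken from the entries above and only the variable name added for illustration:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # expected to print nvme0
[[ $name == "nvme0" ]]                # xtrace renders the quoted right-hand side as \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0                      # tear down before the next combination

The standalone nvme0n1 entries interleaved with these checks appear to be the bdev name returned by the preceding bdev_nvme_attach_controller call.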
00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 nvme0n1 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.762 nvme0n1 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.762 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 nvme0n1 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.021 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 nvme0n1 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.281 06:09:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.281 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.541 nvme0n1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.541 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.799 nvme0n1 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.799 06:09:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.799 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:21.057 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.058 nvme0n1 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.058 06:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.316 06:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:21.316 06:09:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.316 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.574 nvme0n1 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:21.574 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.832 nvme0n1 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.832 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.833 06:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 nvme0n1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.658 nvme0n1 00:23:22.658 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.658 06:09:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.658 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.658 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.658 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:22.916 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.917 06:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.174 nvme0n1 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.174 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.432 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 nvme0n1 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.948 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
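The trace above repeats one verification cycle per (dhgroup, keyid) pair: nvmet_auth_set_key installs the target-side secret, connect_authenticate then restricts the initiator to a single digest and DH group, attaches a controller with the matching DH-HMAC-CHAP keys, checks that it shows up as nvme0, and detaches it again. Condensed into a standalone sketch (the RPC names, address, NQNs and the key0/ckey0 names are taken from the log; the rpc.py path and the scaffolding around the calls are assumptions, since the suite drives everything through its own rpc_cmd wrapper):

  #!/usr/bin/env bash
  # Sketch of a single connect_authenticate iteration as it appears in the
  # trace. Assumes the SPDK target is already listening on 10.0.0.1:4420 with
  # subsystem nqn.2024-02.io.spdk:cnode0, and that key0/ckey0 were registered
  # earlier by the suite (that setup is outside this excerpt).
  set -euo pipefail

  rpc=./scripts/rpc.py        # stand-in for the suite's rpc_cmd wrapper
  digest=sha384
  dhgroup=ffdhe4096
  keyid=0

  # Allow only the digest/dhgroup under test, so a successful connect proves
  # that this exact combination was negotiated.
  "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and, when one exists for this keyid, the
  # controller key (the keyid=4 iterations in the log pass only --dhchap-key).
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # The controller must come up under the expected name, then it is detached
  # so the next (dhgroup, keyid) combination starts from a clean slate.
  [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  "$rpc" bdev_nvme_detach_controller nvme0
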
00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.949 06:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.207 nvme0n1 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.207 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
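The secrets echoed throughout the trace use the qualified secret representation DHHC-1:<t>:<base64>:, where, by the usual DH-HMAC-CHAP secret convention, the middle field records how the secret is transformed before use (00 for none, 01/02/03 commonly denoting SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. That interpretation is background knowledge about the format, not something the log itself asserts; a quick way to sanity-check one of the strings above is sketched below (the sample key is copied verbatim from the trace):

  # Sketch: inspect one DHHC-1 secret from the trace. The field interpretation
  # (transform id, secret + 4-byte CRC-32) is an assumption based on the common
  # qualified-secret format.
  key='DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn:'

  IFS=: read -r tag transform b64 _ <<< "$key"
  [[ $tag == DHHC-1 ]] || { echo "not a DHHC-1 secret" >&2; exit 1; }

  # 48 base64 characters decode to 36 bytes here: a 32-byte secret plus CRC.
  len=$(printf '%s' "$b64" | base64 -d | wc -c)
  echo "transform=${transform} decoded=${len} bytes (secret $((len - 4)) + 4-byte CRC)"
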
00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.466 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.032 nvme0n1 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:25.032 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.033 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.290 06:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.855 nvme0n1 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:25.855 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.856 06:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.790 nvme0n1 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.790 06:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.726 nvme0n1 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.726 06:09:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.726 06:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.291 nvme0n1 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.291 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.292 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.550 nvme0n1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.550 06:09:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.550 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 nvme0n1 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 nvme0n1 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.810 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.069 06:09:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:29.069 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.070 06:09:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 nvme0n1 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.070 06:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.329 nvme0n1 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.329 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.587 nvme0n1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 
06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 06:09:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 nvme0n1 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
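Each pass of the loop traced above exercises one digest/dhgroup/keyid combination end to end: the host is restricted to a single DH-HMAC-CHAP digest and DH group, a controller is attached with the matching key, its presence is confirmed, and it is detached again before the next combination. As a compact reference, the sha512/ffdhe3072, keyid=1 pass boils down to roughly the direct RPC calls sketched below. The scripts/rpc.py path is an assumption (rpc_cmd in the trace wraps it), and key1/ckey1 are assumed to have been registered earlier in the test run, outside this excerpt; the transport, address, port and NQNs are the ones visible in the trace.

  rpc=scripts/rpc.py   # rpc_cmd in the trace wraps this script (assumed path)

  # Restrict the host to one DH-HMAC-CHAP digest and one DH group.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach and authenticate with key1; ckey1 additionally authenticates the
  # controller back to the host (bidirectional authentication).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the controller came up, then tear it down before the next combination.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0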
00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 nvme0n1 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.846 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.105 06:09:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
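The get_main_ns_ip fragments in the trace (nvmf/common.sh lines 741-755) show how the test picks the address to dial: an associative array maps each transport to the name of the environment variable that holds its address, and indirect expansion turns NVMF_INITIATOR_IP into 10.0.0.1 for TCP. Reconstructed only from the expanded values visible here, the helper looks roughly like the sketch below; the transport variable name and the early returns are assumptions, since the trace shows the tests already expanded.

  # Sketch of the IP-selection helper as it appears in the xtrace output above;
  # exact variable names and error handling are assumptions.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()

      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA dials the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP dials the initiator-side IP

      # Bail out if the transport is unset or has no mapping.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; ${!ip} is 10.0.0.1

      echo "${!ip}"
  }

The address it prints feeds straight into the -a argument of the bdev_nvme_attach_controller call that follows in the trace.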
00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.105 nvme0n1 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.105 06:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.365 
06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 nvme0n1 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.625 nvme0n1 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.625 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.885 06:09:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.885 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 nvme0n1 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
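[editorial note] The host/auth.sh@101-@104 markers recurring above correspond to a nested loop over DH groups and key indices. A rough sketch of that loop shape follows; it is not the verbatim script, and the array contents are inferred from the values echoed in the log (the digest is sha512 throughout this pass).

# Approximate loop driving the @101-@104 trace markers (sketch only):
for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
    for keyid in "${!keys[@]}"; do         # 0..4
        nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"    # target side: digest, dhgroup, key (+ optional ckey)
        connect_authenticate "sha512" "$dhgroup" "$keyid"  # host side: cycle sketched earlier
    done
done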
00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.145 06:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.405 nvme0n1 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.405 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.664 nvme0n1 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.664 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.923 nvme0n1 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.923 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
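[editorial note] The repeated nvmf/common.sh@741-@755 fragments are a small helper that resolves the address used for the attach from the transport type. An approximate reconstruction is given below; the transport variable name and the indirect expansion are inferred from the trace, not quoted from the script.

# Approximate reconstruction of get_main_ns_ip from the @741-@755 trace lines:
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] && echo "${!ip}"       # prints 10.0.0.1 in this run
}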
00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.182 06:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.441 nvme0n1 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.441 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
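[editorial note] One detail worth calling out from the host/auth.sh@58 lines: the controller-key arguments are built with bash's ":+" alternate-value expansion, so key indices whose ckeys entry is empty (keyid 4 above) attach with --dhchap-key only. A minimal stand-alone illustration follows; the values are placeholders, not the real secrets.

# Stand-alone illustration of the ":+" expansion used at host/auth.sh@58
# (in the test, the resulting words become extra rpc_cmd arguments):
declare -a ckeys=([0]="placeholder-ctrlr-secret" [4]="")
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=4 -> <no controller key>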
00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.442 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.701 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 nvme0n1 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.960 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.961 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.961 06:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.961 06:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.961 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.961 06:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.540 nvme0n1 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.540 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.112 nvme0n1 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.112 06:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.113 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.113 06:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 nvme0n1 00:23:34.371 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.371 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.371 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.371 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.371 06:09:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.371 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.630 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgxOWNkM2YyMmI0NWMzNGQ1ZmQ4OWI1OGYyNzhkMDdO3vRn: 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: ]] 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjhlOGI1ODVlOWI5YWE4ZWNiMWRhMDAxODUwYjg1MDc2OGJjM2VjZTBhYzk3Y2JkNzExMGE4MDQ3YWEwMGU4NoK/ugk=: 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.631 06:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.198 nvme0n1 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.198 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.458 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 nvme0n1 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 06:09:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.285 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTlhYmUxMDA0MWE0NDBiMGNkODI1NDMzNmQzNTQxMjhVv3K/: 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU4ZTUyNTUzOWNkNTdlMzJjMTYzOTQyMDk3YWYzZWP32S7v: 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.286 06:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.854 nvme0n1 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.854 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWE2NjY2MTAwZWFkMzJiMGJmZjBiYTQ5YWU3YjFiYzAxNGQyNmJjOWEwYWI0MjE3SU1ACg==: 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDQ5ODVkN2U2OGUzOTI0ZjIxY2Q2YTk5OWExZDllZjnyw+aC: 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:37.113 06:09:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.113 06:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.682 nvme0n1 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.682 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTBlOGU1MDg3OTNjNzRkNTI4MjZiOWEyZTBiZWRlOGQ2MDg3MDU0ZmRiMDMzOWU2OTdmOTEzODM5MTgxZmVkYhXhr8E=: 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:37.683 06:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.616 nvme0n1 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGViZmMzMjlmZTE2YzE0NmUzMTQ5ZjlmZjliYjUxYzdkZGU3YzM4MmI5N2IwYzU4WPSSzw==: 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjFiZTFkMTc4MTRlZjJjZmZlNDYwODU2ZmUxNmY4NTQxMzg4NjI2ZmY1NjFhOWU0Owgczw==: 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.616 
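The trace above repeats the same connect/verify/detach cycle once per DHCHAP key id (0 through 4) under sha512/ffdhe8192 before re-keying for the failure cases. A condensed sketch of one iteration follows; it assumes the named secrets (key1/ckey1) were registered earlier in the script, that $rootdir points at the spdk repo, and that rpc.py talks to the default /var/tmp/spdk.sock, so treat it as an illustration of the pattern rather than the script itself:

  # host side: limit the offered digest/dhgroup, then attach using the per-key secrets
  # (nvmet_auth_set_key, seen echoing 'hmac(sha512)'/ffdhe8192/DHHC-1 blobs above, pushes the
  #  matching material to the kernel target; its internals are not reproduced here)
  "$rootdir/scripts/rpc.py" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  "$rootdir/scripts/rpc.py" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  "$rootdir/scripts/rpc.py" bdev_nvme_detach_controller nvme0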
06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.616 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.616 request: 00:23:38.616 { 00:23:38.616 "name": "nvme0", 00:23:38.617 "trtype": "tcp", 00:23:38.617 "traddr": "10.0.0.1", 00:23:38.617 "adrfam": "ipv4", 00:23:38.617 "trsvcid": "4420", 00:23:38.617 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.617 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.617 "prchk_reftag": false, 00:23:38.617 "prchk_guard": false, 00:23:38.617 "hdgst": false, 00:23:38.617 "ddgst": false, 00:23:38.617 "method": "bdev_nvme_attach_controller", 00:23:38.617 "req_id": 1 00:23:38.617 } 00:23:38.617 Got JSON-RPC error response 00:23:38.617 response: 00:23:38.617 { 00:23:38.617 "code": -5, 00:23:38.617 "message": "Input/output error" 00:23:38.617 } 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.617 request: 00:23:38.617 { 00:23:38.617 "name": "nvme0", 00:23:38.617 "trtype": "tcp", 00:23:38.617 "traddr": "10.0.0.1", 00:23:38.617 "adrfam": "ipv4", 00:23:38.617 "trsvcid": "4420", 00:23:38.617 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.617 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.617 "prchk_reftag": false, 00:23:38.617 "prchk_guard": false, 00:23:38.617 "hdgst": false, 00:23:38.617 "ddgst": false, 00:23:38.617 "dhchap_key": "key2", 00:23:38.617 "method": "bdev_nvme_attach_controller", 00:23:38.617 "req_id": 1 00:23:38.617 } 00:23:38.617 Got JSON-RPC error response 00:23:38.617 response: 00:23:38.617 { 00:23:38.617 "code": -5, 00:23:38.617 "message": "Input/output error" 00:23:38.617 } 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.617 06:09:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.617 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.876 request: 00:23:38.876 { 00:23:38.876 "name": "nvme0", 00:23:38.876 "trtype": "tcp", 00:23:38.876 "traddr": "10.0.0.1", 00:23:38.876 "adrfam": "ipv4", 
00:23:38.876 "trsvcid": "4420", 00:23:38.876 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:38.876 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:38.876 "prchk_reftag": false, 00:23:38.876 "prchk_guard": false, 00:23:38.876 "hdgst": false, 00:23:38.876 "ddgst": false, 00:23:38.876 "dhchap_key": "key1", 00:23:38.876 "dhchap_ctrlr_key": "ckey2", 00:23:38.876 "method": "bdev_nvme_attach_controller", 00:23:38.876 "req_id": 1 00:23:38.876 } 00:23:38.876 Got JSON-RPC error response 00:23:38.876 response: 00:23:38.876 { 00:23:38.876 "code": -5, 00:23:38.876 "message": "Input/output error" 00:23:38.876 } 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.876 rmmod nvme_tcp 00:23:38.876 rmmod nvme_fabrics 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.876 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 84484 ']' 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 84484 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 84484 ']' 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 84484 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84484 00:23:38.877 killing process with pid 84484 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84484' 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 84484 00:23:38.877 06:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 84484 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.250 
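The three attach attempts logged above are negative tests: with no key, with a mismatched key2, and with key1 paired against ckey2, each RPC is expected to fail, and the JSON-RPC replies all carry code -5 ("Input/output error"), i.e. DH-HMAC-CHAP authentication was refused. The NOT/valid_exec_arg wrappers simply invert the exit status. A minimal sketch of that expected-failure pattern, again assuming rpc.py on its default socket:

  # the attach must NOT succeed; a successful return here would fail the test
  if "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "attach unexpectedly succeeded with the wrong key" >&2
      exit 1
  fi
  # bdev_nvme_get_controllers | jq length should still report 0 afterwards, as in the trace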
06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:40.250 06:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:40.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:40.816 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:41.073 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:41.073 06:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zij /tmp/spdk.key-null.Zqb /tmp/spdk.key-sha256.1r3 /tmp/spdk.key-sha384.JQ4 /tmp/spdk.key-sha512.gZI /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:23:41.073 06:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:41.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:41.330 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:41.330 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:41.330 00:23:41.330 real 0m39.997s 00:23:41.330 user 0m35.016s 00:23:41.330 sys 0m4.160s 00:23:41.330 06:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.330 06:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.330 
************************************ 00:23:41.330 END TEST nvmf_auth_host 00:23:41.330 ************************************ 00:23:41.587 06:09:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:41.587 06:09:57 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:41.587 06:09:57 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:41.587 06:09:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:41.587 06:09:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.587 06:09:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.587 ************************************ 00:23:41.587 START TEST nvmf_digest 00:23:41.587 ************************************ 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:41.587 * Looking for test storage... 00:23:41.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.587 06:09:57 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:41.588 Cannot find device "nvmf_tgt_br" 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.588 Cannot find device "nvmf_tgt_br2" 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:41.588 Cannot find device "nvmf_tgt_br" 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:23:41.588 06:09:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:41.588 Cannot find device "nvmf_tgt_br2" 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:23:41.588 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:41.846 06:09:57 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:41.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:41.846 00:23:41.846 --- 10.0.0.2 ping statistics --- 00:23:41.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.846 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:41.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:41.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:41.846 00:23:41.846 --- 10.0.0.3 ping statistics --- 00:23:41.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.846 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:41.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:23:41.846 00:23:41.846 --- 10.0.0.1 ping statistics --- 00:23:41.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.846 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.846 06:09:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:42.105 ************************************ 00:23:42.105 START TEST nvmf_digest_clean 00:23:42.105 ************************************ 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:42.105 06:09:57 
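The nvmf_veth_init sequence above builds the virtual test network and verifies it with the three pings whose statistics are printed: the initiator interface stays in the default namespace while the target interfaces move into nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. A condensed sketch of that topology, using the same names as the trace (link-up steps and the second target interface omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, matching the ping statistics above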
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:42.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=86087 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 86087 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86087 ']' 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.105 06:09:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:42.105 [2024-07-11 06:09:57.913793] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:23:42.105 [2024-07-11 06:09:57.914010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.364 [2024-07-11 06:09:58.094638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.622 [2024-07-11 06:09:58.343385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.622 [2024-07-11 06:09:58.343472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.622 [2024-07-11 06:09:58.343488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.622 [2024-07-11 06:09:58.343501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.622 [2024-07-11 06:09:58.343512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
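nvmfappstart then launches the SPDK target inside that namespace with --wait-for-rpc and records its pid (86087 here) before waiting for the RPC socket. A rough sketch of what that amounts to, with the backgrounding and the waitforlisten polling loop stated as assumptions rather than copied from the helper:

  # run nvmf_tgt in the test namespace; --wait-for-rpc defers subsystem init until framework_start_init
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten (not reproduced here) polls /var/tmp/spdk.sock until the process accepts RPCs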
00:23:42.622 [2024-07-11 06:09:58.343548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.189 06:09:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:43.447 [2024-07-11 06:09:59.155313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:43.447 null0 00:23:43.447 [2024-07-11 06:09:59.284546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.447 [2024-07-11 06:09:59.308642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.447 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.447 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:43.447 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:43.447 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86123 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86123 /var/tmp/bperf.sock 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86123 ']' 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:43.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.448 06:09:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:43.707 [2024-07-11 06:09:59.425604] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:23:43.707 [2024-07-11 06:09:59.425974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86123 ] 00:23:43.707 [2024-07-11 06:09:59.604117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.965 [2024-07-11 06:09:59.844859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.532 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.532 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:44.532 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:44.532 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:44.532 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:45.100 [2024-07-11 06:10:00.873066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:45.100 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.100 06:10:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.665 nvme0n1 00:23:45.666 06:10:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:45.666 06:10:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:45.666 Running I/O for 2 seconds... 
00:23:47.568 00:23:47.568 Latency(us) 00:23:47.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.568 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:47.568 nvme0n1 : 2.01 10640.28 41.56 0.00 0.00 12018.98 10962.39 28120.90 00:23:47.568 =================================================================================================================== 00:23:47.568 Total : 10640.28 41.56 0.00 0.00 12018.98 10962.39 28120.90 00:23:47.568 0 00:23:47.825 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:47.825 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:47.825 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:47.825 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:47.825 | select(.opcode=="crc32c") 00:23:47.825 | "\(.module_name) \(.executed)"' 00:23:47.825 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86123 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86123 ']' 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86123 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86123 00:23:48.083 killing process with pid 86123 00:23:48.083 Received shutdown signal, test time was about 2.000000 seconds 00:23:48.083 00:23:48.083 Latency(us) 00:23:48.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.083 =================================================================================================================== 00:23:48.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86123' 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86123 00:23:48.083 06:10:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86123 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86201 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86201 /var/tmp/bperf.sock 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86201 ']' 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:49.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.024 06:10:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:49.283 [2024-07-11 06:10:05.054777] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:23:49.283 [2024-07-11 06:10:05.055448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:23:49.283 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:49.283 Zero copy mechanism will not be used. 
00:23:49.542 [2024-07-11 06:10:05.235725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.801 [2024-07-11 06:10:05.467550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.369 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.369 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:50.369 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:50.369 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:50.369 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:50.628 [2024-07-11 06:10:06.530228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:50.887 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:50.887 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:51.146 nvme0n1 00:23:51.146 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:51.146 06:10:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:51.405 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:51.405 Zero copy mechanism will not be used. 00:23:51.405 Running I/O for 2 seconds... 
00:23:53.308 00:23:53.308 Latency(us) 00:23:53.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.308 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:53.308 nvme0n1 : 2.00 5410.10 676.26 0.00 0.00 2952.72 2546.97 4379.00 00:23:53.308 =================================================================================================================== 00:23:53.308 Total : 5410.10 676.26 0.00 0.00 2952.72 2546.97 4379.00 00:23:53.308 0 00:23:53.308 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:53.308 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:53.308 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:53.308 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:53.308 | select(.opcode=="crc32c") 00:23:53.308 | "\(.module_name) \(.executed)"' 00:23:53.308 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86201 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86201 ']' 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86201 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86201 00:23:53.567 killing process with pid 86201 00:23:53.567 Received shutdown signal, test time was about 2.000000 seconds 00:23:53.567 00:23:53.567 Latency(us) 00:23:53.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.567 =================================================================================================================== 00:23:53.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86201' 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86201 00:23:53.567 06:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86201 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86275 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86275 /var/tmp/bperf.sock 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86275 ']' 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:54.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.946 06:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.946 [2024-07-11 06:10:10.804019] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:23:54.946 [2024-07-11 06:10:10.804498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86275 ] 00:23:55.218 [2024-07-11 06:10:10.979410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.477 [2024-07-11 06:10:11.190014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.041 06:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.041 06:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:56.041 06:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:56.041 06:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:56.041 06:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:56.607 [2024-07-11 06:10:12.223840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:56.607 06:10:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.607 06:10:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.866 nvme0n1 00:23:56.866 06:10:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:56.866 06:10:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:56.866 Running I/O for 2 seconds... 
00:23:59.398 00:23:59.399 Latency(us) 00:23:59.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.399 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:59.399 nvme0n1 : 2.01 11426.98 44.64 0.00 0.00 11189.83 3381.06 21686.46 00:23:59.399 =================================================================================================================== 00:23:59.399 Total : 11426.98 44.64 0.00 0.00 11189.83 3381.06 21686.46 00:23:59.399 0 00:23:59.399 06:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:59.399 06:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:59.399 06:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:59.399 | select(.opcode=="crc32c") 00:23:59.399 | "\(.module_name) \(.executed)"' 00:23:59.399 06:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:59.399 06:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86275 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86275 ']' 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86275 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86275 00:23:59.399 killing process with pid 86275 00:23:59.399 Received shutdown signal, test time was about 2.000000 seconds 00:23:59.399 00:23:59.399 Latency(us) 00:23:59.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.399 =================================================================================================================== 00:23:59.399 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86275' 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86275 00:23:59.399 06:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86275 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86342 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86342 /var/tmp/bperf.sock 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 86342 ']' 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.774 06:10:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.774 [2024-07-11 06:10:16.370233] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:00.774 [2024-07-11 06:10:16.370635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86342 ] 00:24:00.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:00.774 Zero copy mechanism will not be used. 
00:24:00.774 [2024-07-11 06:10:16.549294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.032 [2024-07-11 06:10:16.751864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.599 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.599 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:01.599 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.599 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.599 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.857 [2024-07-11 06:10:17.763301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:02.116 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.116 06:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.374 nvme0n1 00:24:02.374 06:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.374 06:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:02.632 Zero copy mechanism will not be used. 00:24:02.632 Running I/O for 2 seconds... 
00:24:04.547 00:24:04.547 Latency(us) 00:24:04.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.547 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:04.547 nvme0n1 : 2.00 5029.03 628.63 0.00 0.00 3173.08 2278.87 10724.07 00:24:04.547 =================================================================================================================== 00:24:04.547 Total : 5029.03 628.63 0.00 0.00 3173.08 2278.87 10724.07 00:24:04.547 0 00:24:04.547 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.547 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.547 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.547 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:04.547 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.547 | select(.opcode=="crc32c") 00:24:04.547 | "\(.module_name) \(.executed)"' 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86342 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 86342 ']' 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86342 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86342 00:24:04.806 killing process with pid 86342 00:24:04.806 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.806 00:24:04.806 Latency(us) 00:24:04.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.806 =================================================================================================================== 00:24:04.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86342' 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86342 00:24:04.806 06:10:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86342 00:24:05.744 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86087 00:24:05.744 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 86087 ']' 00:24:05.744 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 86087 00:24:05.744 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86087 00:24:06.003 killing process with pid 86087 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86087' 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 86087 00:24:06.003 06:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 86087 00:24:06.939 ************************************ 00:24:06.939 END TEST nvmf_digest_clean 00:24:06.939 ************************************ 00:24:06.939 00:24:06.939 real 0m24.858s 00:24:06.939 user 0m47.959s 00:24:06.939 sys 0m4.832s 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.939 ************************************ 00:24:06.939 START TEST nvmf_digest_error 00:24:06.939 ************************************ 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=86444 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 86444 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86444 ']' 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.939 06:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:06.939 [2024-07-11 06:10:22.822528] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:06.939 [2024-07-11 06:10:22.822711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.198 [2024-07-11 06:10:22.996945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.457 [2024-07-11 06:10:23.166023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.457 [2024-07-11 06:10:23.166106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.457 [2024-07-11 06:10:23.166123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.457 [2024-07-11 06:10:23.166136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.457 [2024-07-11 06:10:23.166146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.457 [2024-07-11 06:10:23.166182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.025 [2024-07-11 06:10:23.775072] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.025 06:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.025 [2024-07-11 06:10:23.936289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:08.283 null0 00:24:08.283 [2024-07-11 06:10:24.038541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.283 [2024-07-11 06:10:24.062632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86482 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86482 /var/tmp/bperf.sock 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86482 ']' 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.283 06:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.541 [2024-07-11 06:10:24.222196] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:08.541 [2024-07-11 06:10:24.222584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86482 ] 00:24:08.541 [2024-07-11 06:10:24.396463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.800 [2024-07-11 06:10:24.566000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.059 [2024-07-11 06:10:24.725188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:09.318 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.318 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:09.318 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.318 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.577 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.836 nvme0n1 00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:24:09.836 06:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.095 Running I/O for 2 seconds... 00:24:10.095 [2024-07-11 06:10:25.801660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.801813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.801842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.822129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.822197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.822217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.841736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.841798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.841821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.861210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.861277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.861298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.880623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.880732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.880757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.899873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.899943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.899963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.919166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.919227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.919249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.938890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.938963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.938984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.958918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.958979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.959001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.979400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.979471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.979491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.095 [2024-07-11 06:10:25.999337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.095 [2024-07-11 06:10:25.999417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.095 [2024-07-11 06:10:25.999455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.354 [2024-07-11 06:10:26.022959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.354 [2024-07-11 06:10:26.023061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.354 [2024-07-11 06:10:26.023081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.354 [2024-07-11 06:10:26.043051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.043114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.043136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.064120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.064187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.064207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.084498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.084551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.105473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.105545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.105566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.126342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.126404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.126426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.147211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.147278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.147297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.168488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.168541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.168566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.188129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.188196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.188216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.207893] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.207954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.207977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.228506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.228568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.228602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.251772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.251837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.251859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.355 [2024-07-11 06:10:26.272546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.355 [2024-07-11 06:10:26.272639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.355 [2024-07-11 06:10:26.272674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.293123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.293184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.293209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.312737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.312819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.312838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.332324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.332376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.332405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.352106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.352172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.352191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.371518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.371579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.371600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.390746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.390811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.390829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.410387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.410450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.410473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.433037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.433108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.614 [2024-07-11 06:10:26.433144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.614 [2024-07-11 06:10:26.454095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.614 [2024-07-11 06:10:26.454168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.615 [2024-07-11 06:10:26.454191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.615 [2024-07-11 06:10:26.473828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.615 [2024-07-11 06:10:26.473894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.615 [2024-07-11 06:10:26.473914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.615 [2024-07-11 06:10:26.494383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.615 [2024-07-11 06:10:26.494446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.615 [2024-07-11 06:10:26.494471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.615 [2024-07-11 06:10:26.514096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.615 [2024-07-11 06:10:26.514163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.615 [2024-07-11 06:10:26.514182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.615 [2024-07-11 06:10:26.534280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.615 [2024-07-11 06:10:26.534372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.615 [2024-07-11 06:10:26.534395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.555024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.555091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.574339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.574402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.574426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.593765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.593835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.593854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.613047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.613106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:528 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.613129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.632410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.632482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.632503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.651746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.651806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.651829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.673748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.673836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.673866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.696875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.696943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.696983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.718837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.718906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.718927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.740245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.740334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.740361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.761814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.761872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.761910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.874 [2024-07-11 06:10:26.783822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:10.874 [2024-07-11 06:10:26.783885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.874 [2024-07-11 06:10:26.783908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.805108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.805197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.805219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.827448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.827524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.827549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.849180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.849252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.849273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.870176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.870242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.870266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.890847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.890917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.890937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.911213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.911277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.911300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.932455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.932531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.932553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.953422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.953486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.953509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.973945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.974033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:26.995104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:26.995167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:26.995193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:27.016743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:27.016817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:27.016839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.134 [2024-07-11 06:10:27.040830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.134 [2024-07-11 06:10:27.040902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.134 [2024-07-11 06:10:27.040930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.393 
[2024-07-11 06:10:27.062955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.393 [2024-07-11 06:10:27.063027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.393 [2024-07-11 06:10:27.063047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.393 [2024-07-11 06:10:27.082937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.393 [2024-07-11 06:10:27.083001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.393 [2024-07-11 06:10:27.083023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.393 [2024-07-11 06:10:27.111443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.393 [2024-07-11 06:10:27.111507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.393 [2024-07-11 06:10:27.111529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.393 [2024-07-11 06:10:27.130723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.393 [2024-07-11 06:10:27.130790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.393 [2024-07-11 06:10:27.130810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.393 [2024-07-11 06:10:27.150053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.150115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.150138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.169392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.169459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.169479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.188626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.188717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.188780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.207871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.207937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.207957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.227008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.227053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.227091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.246612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.246741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.246764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.269958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.270024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.270063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.291080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.291146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.291165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.394 [2024-07-11 06:10:27.310745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.394 [2024-07-11 06:10:27.310822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.394 [2024-07-11 06:10:27.310843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.332752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.332814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:11.653 [2024-07-11 06:10:27.332841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.352403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.352477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.653 [2024-07-11 06:10:27.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.373270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.373349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.653 [2024-07-11 06:10:27.373373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.393946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.653 [2024-07-11 06:10:27.394035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.414008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.414072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.653 [2024-07-11 06:10:27.414112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.653 [2024-07-11 06:10:27.433870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.653 [2024-07-11 06:10:27.433940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.433961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.453458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.453521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.453544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.473075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.473143] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.473163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.492750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.492795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.492832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.512186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.512252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.512271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.532900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.532961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.532984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.552289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.552399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.552421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.654 [2024-07-11 06:10:27.573748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.654 [2024-07-11 06:10:27.573834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.654 [2024-07-11 06:10:27.573860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.596780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.596866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.596890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.621031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:24:11.913 [2024-07-11 06:10:27.621114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.621137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.645179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.645261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.645283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.668935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.669017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.669040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.692879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.692962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.692984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.716654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.716736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.716759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.740598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.740704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.740728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 [2024-07-11 06:10:27.764230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:11.913 [2024-07-11 06:10:27.764316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.913 [2024-07-11 06:10:27.764339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.913 00:24:11.913 Latency(us) 00:24:11.913 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:11.913 nvme0n1 : 2.01 12128.77 47.38 0.00 0.00 10545.31 9353.77 37891.72 00:24:11.913 =================================================================================================================== 00:24:11.913 Total : 12128.77 47.38 0.00 0.00 10545.31 9353.77 37891.72 00:24:11.913 0 00:24:11.913 06:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:11.913 06:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:11.913 06:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:11.913 | .driver_specific 00:24:11.913 | .nvme_error 00:24:11.913 | .status_code 00:24:11.913 | .command_transient_transport_error' 00:24:11.913 06:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 95 > 0 )) 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86482 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86482 ']' 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86482 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86482 00:24:12.479 killing process with pid 86482 00:24:12.479 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.479 00:24:12.479 Latency(us) 00:24:12.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.479 =================================================================================================================== 00:24:12.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86482' 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86482 00:24:12.479 06:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86482 00:24:13.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
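The (( 95 > 0 )) check a few entries above is digest.sh's pass criterion for this job: get_transient_errcount reads the per-bdev NVMe error counters over the bperf RPC socket and pulls out the COMMAND TRANSIENT TRANSPORT ERROR count with jq. A minimal standalone sketch of that helper, using the same socket, bdev name and jq filter shown in the trace (the comment about --nvme-error-stat reflects how the controller is configured elsewhere in this log):

  # Number of completions recorded with the transient transport error status
  # for a given bdev. Relies on bdev_nvme_set_options --nvme-error-stat having
  # been issued so per-status-code counters are maintained.
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  # The digest-error job passes when at least one such completion was counted:
  (( $(get_transient_errcount nvme0n1) > 0 ))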
00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86549 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86549 /var/tmp/bperf.sock 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86549 ']' 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.413 06:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 [2024-07-11 06:10:29.261443] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:13.413 [2024-07-11 06:10:29.262105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86549 ] 00:24:13.413 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.413 Zero copy mechanism will not be used. 
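run_bperf_err starts a dedicated bdevperf instance for each workload in -z mode (start suspended, wait for RPCs) on its own RPC socket, and the harness blocks until that socket is listening before configuring it. A rough equivalent of the launch shown above, with the wait loop standing in for the harness's waitforlisten helper (the rpc_get_methods probe is just one convenient way to test that the socket is up):

  # 128 KiB random reads at queue depth 16 for 2 seconds, pinned to core 1
  # (-m 2); -z keeps bdevperf idle until perform_tests is sent over the socket.
  bperfsock=/var/tmp/bperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r "$bperfsock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Block until the RPC socket answers before issuing any bdev_nvme_* calls.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" rpc_get_methods &>/dev/null; do
      sleep 0.1
  done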
00:24:13.671 [2024-07-11 06:10:29.438717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.930 [2024-07-11 06:10:29.624901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.930 [2024-07-11 06:10:29.791684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:14.497 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.497 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:14.497 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.497 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.756 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.015 nvme0n1 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:15.015 06:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:15.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:15.301 Zero copy mechanism will not be used. 00:24:15.301 Running I/O for 2 seconds... 
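Condensed, the sequence that produces the digest errors in this run is: configure the host bdev layer to keep NVMe error statistics and never retry (so each failure is counted rather than masked), attach the controller over TCP with data digest enabled, then re-arm the crc32c error injection in the accel layer and kick off the I/O. A sketch of those calls with the same arguments as the trace; the socket used by rpc_cmd for the accel_error_inject_error calls is not shown in this log, so the default application socket is assumed here:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Host-side NVMe bdev options: keep per-status-code error counters and do not
  # retry failed I/O, so digest failures surface directly in bdev_get_iostat.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, then attach the target with data digest on.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 32 crc32c operations so computed digests stop matching, then run
  # the queued bdevperf job.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests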
00:24:15.301 [2024-07-11 06:10:30.967556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.967653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.967693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.972913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.973029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.978284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.978387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.978408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.983403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.983482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.983505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.988532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.988598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.993761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.993847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.993868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:30.998966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:30.999053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:30.999073] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:31.003942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:31.004021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:31.004044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:31.008958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:31.009021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:31.009059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:31.013954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:31.014043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.301 [2024-07-11 06:10:31.014063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.301 [2024-07-11 06:10:31.018927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.301 [2024-07-11 06:10:31.019014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.019034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.023950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.024028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.024051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.029076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.029141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.029164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.034155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.034237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:15.302 [2024-07-11 06:10:31.034257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.039290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.039391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.039412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.044316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.044381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.044406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.049529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.049608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.049631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.054499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.054584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.054605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.059552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.059638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.059670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.064581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.064674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.064698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.069590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.069676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.069700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.074715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.074798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.074818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.079684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.079767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.079787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.084764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.084854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.084878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.089884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.089965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.095689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.095779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.095804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.101272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.101365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.101388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.106848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.106926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.106948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.112297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.112367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.112393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.118385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.118468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.118509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.123911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.124002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.124024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.129412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.129497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.129517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.134929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.135019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.135076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.140352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.140419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.140443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 
06:10:31.145812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.145893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.145917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.151115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.151199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.151219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.156335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.302 [2024-07-11 06:10:31.156410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.302 [2024-07-11 06:10:31.156432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.302 [2024-07-11 06:10:31.161501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.161581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.161604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.166744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.166807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.166861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.171806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.171890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.171910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.176842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.176926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.176946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.181876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.181954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.181977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.186980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.187058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.187080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.191921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.192005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.192026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.197200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.197288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.197309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.202437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.202530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.202562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.207560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.207640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.207675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.212621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.212731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 
06:10:31.212752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.303 [2024-07-11 06:10:31.217911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.303 [2024-07-11 06:10:31.218012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.303 [2024-07-11 06:10:31.218033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.223545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.223608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.223647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.229095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.229174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.229212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.234264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.234349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.234370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.239376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.239484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.239506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.244545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.244611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.244666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.249525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.249604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.249631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.254648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.254763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.254785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.259842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.259946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.259966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.265104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.265198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.265222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.270244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.270324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.270347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.275295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.275400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.275421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.280384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.280459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.280480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.285489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 
[2024-07-11 06:10:31.285552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.285592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.290588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.290675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.290700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.295598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.295707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.295728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.300696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.300789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.300809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.305845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.305922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.305945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.310868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.310930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.310968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.315885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.315971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.315992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.320897] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.320985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.325957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.326020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.326044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.331860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.331924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.331962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.337533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.337606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.337631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.343268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.343346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.343369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.349238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.564 [2024-07-11 06:10:31.349351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.564 [2024-07-11 06:10:31.355293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.564 [2024-07-11 06:10:31.355377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.355402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.361161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.361239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.361262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.366655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.366799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.366820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.371863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.371976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.371998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.377199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.377292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.377313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.382370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.382450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.382473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.387475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.387555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.387577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.392787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.392870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.392890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.397863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.397946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.397966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.402852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.402929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.402951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.408060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.408138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.408160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.413144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.413231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.413252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.418290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.418377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.418397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.423374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.423437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.423475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.428424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.428492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.428517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.433502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.433591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.433613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.438571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.438669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.438706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.443595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.443682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.443707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.448631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.448733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.448757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.453624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.453717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.453737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.458752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.458836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.458856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.463728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 
06:10:31.463790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.463831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.468804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.468866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.468905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.473835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.473920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.473941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.565 [2024-07-11 06:10:31.478936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.565 [2024-07-11 06:10:31.479019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.565 [2024-07-11 06:10:31.479057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.484761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.484841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.484880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.490276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.490356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.490380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.495363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.495450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.495471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.500444] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.500502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.500524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.505507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.505586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.505609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.510696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.510760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.510787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.515583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.515682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.515703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.520715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.520814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.520835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.525654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.525741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.530713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.530775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.530812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.535691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.535777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.540773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.540855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.540876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.545731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.545808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.545833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.550856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.550918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.550956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.555815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.555898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.555918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.560908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.560990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.561010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.565897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.565974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.565997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.570918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.570996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.571019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.575813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.575895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.575926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.580936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.826 [2024-07-11 06:10:31.581024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.826 [2024-07-11 06:10:31.581044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.826 [2024-07-11 06:10:31.585958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.586021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.586043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.590971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.591035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.591073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.595859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.595949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.595969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.600956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.601042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.601062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.605895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.605972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.605991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.611584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.611690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.611711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.617235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.617315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.617335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.622292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.622370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.622389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.627334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.627413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.627433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.632540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.632591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.632611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.637733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 
06:10:31.637796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.637830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.642800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.642865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.642884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.647775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.647837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.647871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.652857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.652920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.652955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.657800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.657862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.657897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.662886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.662949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.662985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.667852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.667949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.672997] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.673075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.673095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.678016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.678094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.678129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.683009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.683102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.683120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.688061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.688140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.688160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.693136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.693214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.693233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.698153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.698232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.698251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.703118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.703197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.703232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.708204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.708283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.708327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.713254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.713331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.713350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.718353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.718431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.718450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.723464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.723543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.723562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:15.827 [2024-07-11 06:10:31.728507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.827 [2024-07-11 06:10:31.728574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.827 [2024-07-11 06:10:31.728595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.828 [2024-07-11 06:10:31.733685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.828 [2024-07-11 06:10:31.733747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.828 [2024-07-11 06:10:31.733782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:15.828 [2024-07-11 06:10:31.738608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.828 [2024-07-11 06:10:31.738697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.828 [2024-07-11 06:10:31.738717] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:15.828 [2024-07-11 06:10:31.744110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:15.828 [2024-07-11 06:10:31.744191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.828 [2024-07-11 06:10:31.744242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.749564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.749643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.749675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.754854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.754917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.754951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.759830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.759892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.759927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.764914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.764976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.765009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.769830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.769907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.769926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.774919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.774980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.775015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.779805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.779882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.779901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.784873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.784935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.784969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.789866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.789929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.789963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.794854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.794946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.794965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.799819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.799881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.799915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.804873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.804936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.804971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.809807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.809884] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.809904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.814723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.814786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.814820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.819701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.819763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.819798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.824716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.824778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.824812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.829703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.829779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.829798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.834730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.834807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.834826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.839828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.839890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.839925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.845006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.845069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.845103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.849979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.850041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.850075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.855041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.855121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.855156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.860245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.860333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.860354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.865322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.865401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.097 [2024-07-11 06:10:31.865422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.097 [2024-07-11 06:10:31.870395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.097 [2024-07-11 06:10:31.870458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.870493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.875593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.875682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.875702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:24:16.098 [2024-07-11 06:10:31.880809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.880870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.880905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.885816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.885894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.885913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.890760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.890823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.890858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.895774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.895838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.895856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.900809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.900870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.900906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.905922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.906001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.906019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.911020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.911099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.911118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.916000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.916078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.916097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.921840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.921909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.928375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.928428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.928449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.934981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.935069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.940729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.940800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.940821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.946421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.946503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.946524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.951782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.951863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:16.098 [2024-07-11 06:10:31.951883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.957420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.957504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.957526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.963204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.963284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.963320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.969017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.969082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.969118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.974901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.974969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.974990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.980529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.980582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.980614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.986343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.986412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.986434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.991771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.991835] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.991870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:31.997324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:31.997409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:31.997431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:32.002848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:32.002914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:32.002949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.098 [2024-07-11 06:10:32.008584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.098 [2024-07-11 06:10:32.008652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.098 [2024-07-11 06:10:32.008677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.014131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.014186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.014207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.019628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.019705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.019738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.025566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.025622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.025657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.031151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:24:16.372 [2024-07-11 06:10:32.031206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.031236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.036612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.036704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.036737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.042052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.042119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.042156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.047703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.047769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.047790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.053080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.053163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.053184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.058565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.058648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.058681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.063904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.063968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.064004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.069191] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.069256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.069291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.074609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.074703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.074724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.079964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.080029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.080064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.085197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.085262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.085314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.090498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.090564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.090599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.096219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.096328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.096355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.101847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.101912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.107091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.107156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.107192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.112416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.112485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.112507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.117662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.117750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.117786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.123331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.123398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.123434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.129049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.129132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.129153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.134796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.372 [2024-07-11 06:10:32.134863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.372 [2024-07-11 06:10:32.134900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.372 [2024-07-11 06:10:32.140482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.140535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.140557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.146265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.146363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.146384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.152105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.152171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.152208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.157642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.157736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.157758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.163242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.163307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.163344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.168730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.168783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.168804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.174349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.174415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.174452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.179692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.179757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.179776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.184902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.184967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.185002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.190224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.190306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.190345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.195575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.195669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.195691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.201050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.201115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.201150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.206404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.206470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.206506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.211753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.211820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.211840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.217207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.217271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.217322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.222757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.222834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.222870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.227930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.228011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.228030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.233113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.233192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.233211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.238308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.238387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.238406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.243494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.243575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.243609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.248674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.248761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.248796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.253840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.253919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.253974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.259034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.259113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.259132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.264211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.264299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.264353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.269314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.269394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.269413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.274329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.274398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.274433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.279488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.279551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.279587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.373 [2024-07-11 06:10:32.284646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.284747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.373 [2024-07-11 06:10:32.284782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.373 
[2024-07-11 06:10:32.290246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.373 [2024-07-11 06:10:32.290328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.374 [2024-07-11 06:10:32.290349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.295811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.295872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.295907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.301568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.301647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.301678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.306920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.306986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.307005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.311998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.312077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.312096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.317073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.317150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.317169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.322161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.322241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.322261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.327322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.327385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.327421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.332342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.332410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.332431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.337409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.337472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.337507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.342551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.342631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.342650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.347698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.347760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.347795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.353131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.353213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.353233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.358554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.358636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 
[2024-07-11 06:10:32.358684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.364145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.364225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.364246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.370047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.375776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.375854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.375874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.381279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.381378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.381399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.386960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.387039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.392346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.392401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.392422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.397657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.397763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.397799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.402815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.402878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.402914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.407765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.407843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.407862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.413012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.413075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.413109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.418047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.633 [2024-07-11 06:10:32.418109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.633 [2024-07-11 06:10:32.418144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.633 [2024-07-11 06:10:32.423101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.423180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.423199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.428052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.428131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.428151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.433277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 
[2024-07-11 06:10:32.433356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.433376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.438307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.438370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.438405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.443299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.443378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.443398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.448784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.448862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.448881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.453818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.453896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.453915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.458857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.458919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.458953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.463840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.463919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.463939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.468902] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.468963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.468998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.473915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.473979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.474014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.478896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.478987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.479006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.483940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.484019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.484038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.489356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.489437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.489472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.495153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.495222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.495244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.500425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.500492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.500513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.505712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.505790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.505809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.510816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.510892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.510928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.515857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.515935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.515954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.520908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.520969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.521004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.525884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.525963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.525982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.530899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.530964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.530983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.535837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.535899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 
06:10:32.535934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.540906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.540967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.541002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.546106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.546185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.546203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.634 [2024-07-11 06:10:32.551450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.634 [2024-07-11 06:10:32.551531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.634 [2024-07-11 06:10:32.551552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.894 [2024-07-11 06:10:32.557067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.894 [2024-07-11 06:10:32.557130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.557164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.562299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.562360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.562395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.567434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.567513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.567533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.572539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.572592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.572613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.577709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.577806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.582711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.582772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.582807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.587663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.587740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.587759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.592740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.592832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.592852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.597865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.597943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.597963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.602855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.602933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.602952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.607891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 
06:10:32.607954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.607989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.612978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.613042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.613077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.618022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.618101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.622996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.623074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.623094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.627971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.628035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.628054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.633053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.633131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.633149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.637989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.638067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.638087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.642991] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.643069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.643105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.647996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.648074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.648093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.653086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.653164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.653184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.658164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.658243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.658263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.663146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.663224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.663243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.668003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.668082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.668101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.673039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.673117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.673135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.678133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.678212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.678231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.683104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.683182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.683201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.688000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.688080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.688115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.693090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.693169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.895 [2024-07-11 06:10:32.698131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.895 [2024-07-11 06:10:32.698211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.895 [2024-07-11 06:10:32.698229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.703050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.703128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.703147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.708004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.708081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.708116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.713091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.713154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.713188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.718112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.718174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.718209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.723113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.723176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.723211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.728033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.728096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.728131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.733106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.733184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.733203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.738471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.738568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.738604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.744192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.744293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.744347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.749316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.749380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.749415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.754396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.754474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.754509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.759519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.759598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.759616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.764616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.764769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.764789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.769777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.769837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.769871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.774790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.774851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.774886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.779757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.779818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.779852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.784860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.784921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.784955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.789882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.789944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.789978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.794860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.794957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.799857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.799919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.799954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.804922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.805000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.805020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:16.896 [2024-07-11 06:10:32.809835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:16.896 [2024-07-11 06:10:32.809897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.896 [2024-07-11 06:10:32.809932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.815344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.815424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.815444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.820942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.821021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.821056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.826023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.826101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.826120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.831051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.831115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.831134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.836095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.836172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.836192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.841393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.841473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.841493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.846723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.846786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.846821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.156 
[2024-07-11 06:10:32.851714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.851773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.851791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.856768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.856829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.856865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.861804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.861880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.861900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.866797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.866876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.866895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.871844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.871922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.871942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.876960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.877038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.877057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.882061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.882141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.882161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.156 [2024-07-11 06:10:32.887082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.156 [2024-07-11 06:10:32.887162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.156 [2024-07-11 06:10:32.887182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.892127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.892190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.892226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.897295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.897375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.897395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.902403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.902482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.902501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.907492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.907571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.907590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.912601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.912722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.912776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.917625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.917699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 
[2024-07-11 06:10:32.917734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.922647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.922735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.922754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.927721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.927786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.927816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.932788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.932849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.932884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.937829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.937891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.937926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.942769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.942846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.942865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.947813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.947892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.947911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.157 [2024-07-11 06:10:32.952880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:17.157 [2024-07-11 06:10:32.952941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.157 [2024-07-11 06:10:32.952976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.157 00:24:17.157 Latency(us) 00:24:17.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.157 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:17.157 nvme0n1 : 2.00 5925.89 740.74 0.00 0.00 2696.27 2159.71 10664.49 00:24:17.157 =================================================================================================================== 00:24:17.157 Total : 5925.89 740.74 0.00 0.00 2696.27 2159.71 10664.49 00:24:17.157 0 00:24:17.157 06:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:17.157 06:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:17.157 06:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:17.157 06:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:17.157 | .driver_specific 00:24:17.157 | .nvme_error 00:24:17.157 | .status_code 00:24:17.157 | .command_transient_transport_error' 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86549 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86549 ']' 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86549 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86549 00:24:17.416 killing process with pid 86549 00:24:17.416 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.416 00:24:17.416 Latency(us) 00:24:17.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.416 =================================================================================================================== 00:24:17.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86549' 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86549 00:24:17.416 06:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86549 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
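For reference, the count checked above ("(( 382 > 0 ))") comes from the bdev_get_iostat RPC; a condensed sketch of that query, assuming the same RPC socket and bdev name as in the trace:

  # Read the per-status NVMe error counters tracked for the bdev and extract the
  # number of completions reported as transient transport errors, i.e. the data
  # digest failures logged above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'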
host/digest.sh@56 -- # rw=randwrite 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86616 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86616 /var/tmp/bperf.sock 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86616 ']' 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.352 06:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.610 [2024-07-11 06:10:34.307910] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:18.610 [2024-07-11 06:10:34.308068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86616 ] 00:24:18.610 [2024-07-11 06:10:34.464246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.868 [2024-07-11 06:10:34.641539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.126 [2024-07-11 06:10:34.810085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:19.385 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.385 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:19.385 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:19.385 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
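The bdevperf instance and RPC preparation traced above reduce to roughly the following; paths and the bperf.sock name are taken from the trace, and the accel error-injection call is assumed to go to the nvmf target application's default RPC socket (it is issued through rpc_cmd rather than bperf_rpc):

  # Start bdevperf in wait-for-RPC mode (-z) for the randwrite, 4096-byte, qd=128
  # case, enable per-status NVMe error counters on the initiator side, and make
  # sure crc32c error injection on the target starts out disabled.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the test script waits for the socket to come up before issuing RPCs)
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable   # target-side rpc_cmd in the trace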
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.644 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.211 nvme0n1 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:20.211 06:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.211 Running I/O for 2 seconds... 00:24:20.211 [2024-07-11 06:10:35.986120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:24:20.211 [2024-07-11 06:10:35.989777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:35.990012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.007336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:24:20.211 [2024-07-11 06:10:36.010957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:36.011210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.029366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:24:20.211 [2024-07-11 06:10:36.033046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:36.033288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.051322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:24:20.211 [2024-07-11 06:10:36.054890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:36.055118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.073320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:24:20.211 [2024-07-11 06:10:36.076905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 
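The digest errors in this run are what the injected corruption is expected to produce: the controller is attached with TCP data digest enabled, crc32c corruption is turned on target-side, and the timed run is driven over the bdevperf socket. A condensed sketch with the same addresses and parameters as the trace (the injection call again assumed to go to the target application's default RPC socket):

  # Attach the subsystem with data digest enabled (--ddgst), corrupt the target's
  # crc32c results, then kick off the 2-second randwrite run; each bad digest is
  # reported by the host as a "data digest error" and completes as a transient
  # transport error, which is what the surrounding log lines show.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256   # target-side rpc_cmd in the trace
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests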
06:10:36.077189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.094172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:24:20.211 [2024-07-11 06:10:36.097606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:36.097844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:20.211 [2024-07-11 06:10:36.114069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:24:20.211 [2024-07-11 06:10:36.117613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.211 [2024-07-11 06:10:36.117864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.134334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:24:20.470 [2024-07-11 06:10:36.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.138026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.153834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:24:20.470 [2024-07-11 06:10:36.156912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.156956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.172342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:24:20.470 [2024-07-11 06:10:36.175258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.175301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.190869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:24:20.470 [2024-07-11 06:10:36.193783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.193826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.211591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:24:20.470 [2024-07-11 06:10:36.215075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16463 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.215121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.232762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:24:20.470 [2024-07-11 06:10:36.235832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.235889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.252078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:24:20.470 [2024-07-11 06:10:36.255106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.255183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.271057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:24:20.470 [2024-07-11 06:10:36.274045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.274088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.289609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:24:20.470 [2024-07-11 06:10:36.292473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.292521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.308222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:24:20.470 [2024-07-11 06:10:36.310973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.311028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.326615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:24:20.470 [2024-07-11 06:10:36.329459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.329524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.345314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:24:20.470 [2024-07-11 06:10:36.348396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.348449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.364141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:24:20.470 [2024-07-11 06:10:36.366907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.366958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:20.470 [2024-07-11 06:10:36.382443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:24:20.470 [2024-07-11 06:10:36.385241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.470 [2024-07-11 06:10:36.385307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.402240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:24:20.729 [2024-07-11 06:10:36.404996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.405061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.420877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:24:20.729 [2024-07-11 06:10:36.423479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.423524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.439615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:24:20.729 [2024-07-11 06:10:36.442387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.442434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.460624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:24:20.729 [2024-07-11 06:10:36.463512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.463578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.481024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:24:20.729 [2024-07-11 
06:10:36.484040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.484096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.500874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:24:20.729 [2024-07-11 06:10:36.503425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.503468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.519416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:24:20.729 [2024-07-11 06:10:36.521984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.729 [2024-07-11 06:10:36.522058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:20.729 [2024-07-11 06:10:36.537893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:24:20.729 [2024-07-11 06:10:36.540255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.540340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.556049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:24:20.730 [2024-07-11 06:10:36.558533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.558594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.574440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:24:20.730 [2024-07-11 06:10:36.577039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.592806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:24:20.730 [2024-07-11 06:10:36.595159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.595224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.611061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:24:20.730 [2024-07-11 06:10:36.613569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.613635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.629474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:24:20.730 [2024-07-11 06:10:36.631835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.730 [2024-07-11 06:10:36.631883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:20.730 [2024-07-11 06:10:36.647927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:24:20.989 [2024-07-11 06:10:36.650422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.650503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.666998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:24:20.989 [2024-07-11 06:10:36.669452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.669496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.685713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:24:20.989 [2024-07-11 06:10:36.688358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.688406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.705912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:24:20.989 [2024-07-11 06:10:36.708355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.708412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.726971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:24:20.989 [2024-07-11 06:10:36.729625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.729739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.747796] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:24:20.989 [2024-07-11 06:10:36.750174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.750227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.768006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:24:20.989 [2024-07-11 06:10:36.770421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.770491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.787227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:24:20.989 [2024-07-11 06:10:36.789549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.789611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.805787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:24:20.989 [2024-07-11 06:10:36.807893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.807937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.824065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:24:20.989 [2024-07-11 06:10:36.826251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.826293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.842298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:24:20.989 [2024-07-11 06:10:36.844441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.844487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.860583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:24:20.989 [2024-07-11 06:10:36.862732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.862800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 
m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.878907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:24:20.989 [2024-07-11 06:10:36.881047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.881112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:20.989 [2024-07-11 06:10:36.897041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:24:20.989 [2024-07-11 06:10:36.899051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:20.989 [2024-07-11 06:10:36.899116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:36.916656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:24:21.248 [2024-07-11 06:10:36.918805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:36.918880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:36.935233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:24:21.248 [2024-07-11 06:10:36.937423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:36.937490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:36.953891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:24:21.248 [2024-07-11 06:10:36.955835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:36.955880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:36.972161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:24:21.248 [2024-07-11 06:10:36.974210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:36.974254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:36.990889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:24:21.248 [2024-07-11 06:10:36.992966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:36.993036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.009631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:24:21.248 [2024-07-11 06:10:37.011550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.011592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.027949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:24:21.248 [2024-07-11 06:10:37.029898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.029943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.048469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:24:21.248 [2024-07-11 06:10:37.050748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.050822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.069456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:24:21.248 [2024-07-11 06:10:37.071515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.071561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.090257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:24:21.248 [2024-07-11 06:10:37.092323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.092375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.111544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:24:21.248 [2024-07-11 06:10:37.113534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.131389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:24:21.248 [2024-07-11 06:10:37.133440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.133491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:21.248 [2024-07-11 06:10:37.151289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:24:21.248 [2024-07-11 06:10:37.153257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.248 [2024-07-11 06:10:37.153301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.171539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:24:21.507 [2024-07-11 06:10:37.173500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.173562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.191504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:24:21.507 [2024-07-11 06:10:37.193423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.193490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.211069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:24:21.507 [2024-07-11 06:10:37.212907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.212953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.241987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:24:21.507 [2024-07-11 06:10:37.245565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.245644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.263369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:24:21.507 [2024-07-11 06:10:37.266809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.266883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.283273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:24:21.507 [2024-07-11 06:10:37.286566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23642 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.286637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.303249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:24:21.507 [2024-07-11 06:10:37.306663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.306760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.323239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:24:21.507 [2024-07-11 06:10:37.326599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.326711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.343333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:24:21.507 [2024-07-11 06:10:37.346470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.346549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.362853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:24:21.507 [2024-07-11 06:10:37.365908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.365959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.381941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:24:21.507 [2024-07-11 06:10:37.384957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.385037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.400540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:24:21.507 [2024-07-11 06:10:37.403490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.403555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:21.507 [2024-07-11 06:10:37.419298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:24:21.507 [2024-07-11 06:10:37.422319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.507 [2024-07-11 06:10:37.422386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.438742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:24:21.766 [2024-07-11 06:10:37.441836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.442059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.457644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:24:21.766 [2024-07-11 06:10:37.460597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.460688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.478277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:24:21.766 [2024-07-11 06:10:37.481518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.481569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.498944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:24:21.766 [2024-07-11 06:10:37.501987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.502030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.518496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:24:21.766 [2024-07-11 06:10:37.521477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.521521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.537221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:24:21.766 [2024-07-11 06:10:37.539954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.539997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.556014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 
00:24:21.766 [2024-07-11 06:10:37.559010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.559080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.574672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:24:21.766 [2024-07-11 06:10:37.577413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.577457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.593131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:24:21.766 [2024-07-11 06:10:37.595837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.595880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.611637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:24:21.766 [2024-07-11 06:10:37.614382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.614465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.631008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:24:21.766 [2024-07-11 06:10:37.633772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.633823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.649664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:24:21.766 [2024-07-11 06:10:37.652410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.652467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:21.766 [2024-07-11 06:10:37.668395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:24:21.766 [2024-07-11 06:10:37.671088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.766 [2024-07-11 06:10:37.671151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.687890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:24:22.026 [2024-07-11 06:10:37.690770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.690846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.706778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:24:22.026 [2024-07-11 06:10:37.709353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.709418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.725325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:24:22.026 [2024-07-11 06:10:37.727853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.727898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.744442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:24:22.026 [2024-07-11 06:10:37.747044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.747110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.763499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:24:22.026 [2024-07-11 06:10:37.766101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.766160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.782029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:24:22.026 [2024-07-11 06:10:37.784506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.784553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.800273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:24:22.026 [2024-07-11 06:10:37.802741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.802783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.818492] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:24:22.026 [2024-07-11 06:10:37.821107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.821149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.837148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:24:22.026 [2024-07-11 06:10:37.839537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.839615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.855586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:24:22.026 [2024-07-11 06:10:37.858105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.858171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.874355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:24:22.026 [2024-07-11 06:10:37.876897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.876969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.892901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:24:22.026 [2024-07-11 06:10:37.895182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.895264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.911144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:24:22.026 [2024-07-11 06:10:37.913512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.913575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:22.026 [2024-07-11 06:10:37.929664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:24:22.026 [2024-07-11 06:10:37.931932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.026 [2024-07-11 06:10:37.931977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:24:22.285 [2024-07-11 06:10:37.948876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:24:22.285 [2024-07-11 06:10:37.951358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.285 [2024-07-11 06:10:37.951419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:22.285 00:24:22.285 Latency(us) 00:24:22.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:22.285 nvme0n1 : 2.01 13033.89 50.91 0.00 0.00 9812.04 8638.84 40036.54 00:24:22.285 =================================================================================================================== 00:24:22.285 Total : 13033.89 50.91 0.00 0.00 9812.04 8638.84 40036.54 00:24:22.285 0 00:24:22.285 06:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:22.285 06:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:22.285 06:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:22.285 06:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:22.285 | .driver_specific 00:24:22.285 | .nvme_error 00:24:22.285 | .status_code 00:24:22.285 | .command_transient_transport_error' 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 102 > 0 )) 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86616 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86616 ']' 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86616 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86616 00:24:22.544 killing process with pid 86616 00:24:22.544 Received shutdown signal, test time was about 2.000000 seconds 00:24:22.544 00:24:22.544 Latency(us) 00:24:22.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.544 =================================================================================================================== 00:24:22.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86616' 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86616 00:24:22.544 06:10:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86616 00:24:23.480 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86677 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86677 /var/tmp/bperf.sock 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 86677 ']' 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.480 06:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.480 [2024-07-11 06:10:39.266902] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:23.480 [2024-07-11 06:10:39.267358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86677 ] 00:24:23.480 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.480 Zero copy mechanism will not be used. 
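The (( 102 > 0 )) check earlier in the trace is get_transient_errcount verifying that the previous run actually recorded transient transport errors. The count comes from bdevperf's per-bdev NVMe error statistics (collected because bdev_nvme_set_options is called with --nvme-error-stat) and is extracted with the rpc.py/jq pair shown in the xtrace. A minimal sketch of that query, reassembled onto single lines from the trace:

  # Ask bdevperf (RPC socket /var/tmp/bperf.sock) for nvme0n1's I/O stats, then pull out
  # the transient transport error counter -- 102 in this run, so the check passes.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'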
00:24:23.739 [2024-07-11 06:10:39.439878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.739 [2024-07-11 06:10:39.609263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.998 [2024-07-11 06:10:39.777459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:24.257 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.257 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:24.257 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.257 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.515 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.774 nvme0n1 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:24.774 06:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:25.033 Zero copy mechanism will not be used. 00:24:25.033 Running I/O for 2 seconds... 
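The xtrace above is the setup for the next pass, run_bperf_err randwrite 131072 16: a fresh bdevperf is started on its own RPC socket, NVMe error statistics and a bdev retry count of -1 are configured, crc32c error injection is cleared and re-armed, and the controller is attached with data digest (--ddgst) enabled before perform_tests starts the 2-second workload. Condensed into one sequence as a sketch (rpc_cmd is assumed to target the SPDK target app's default RPC socket, since its underlying rpc.py call is not shown in the trace):

  # Start bdevperf on a dedicated RPC socket; -z makes it wait for the perform_tests RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  # Collect NVMe error completions per bdev; retry count -1 as in the xtrace.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previous crc32c error injection, attach with data digest enabled,
  # then arm crc32c corruption and run the workload (flags as they appear in the xtrace).
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted crc32c results are what produce the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that fill the rest of this run's output below.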
00:24:25.033 [2024-07-11 06:10:40.791797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.792219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.792267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.799272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.799637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.799721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.806642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.807052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.807108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.814138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.814565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.814610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.821604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.822026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.822075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.829036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.829393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.829441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.836602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.837074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.837125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.843837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.844240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.033 [2024-07-11 06:10:40.844319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.033 [2024-07-11 06:10:40.851230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.033 [2024-07-11 06:10:40.851582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.851631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.858611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.859070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.859129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.866124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.866523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.866566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.873690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.874104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.874164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.881148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.881558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.881597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.888697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.889063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 
06:10:40.889110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.895919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.896310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.896349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.903721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.904157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.904202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.911448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.911909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.911961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.918910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.919307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.919346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.926350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.926785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.926831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.933856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.934248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.934295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.941392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.941809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.941848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.034 [2024-07-11 06:10:40.948726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.034 [2024-07-11 06:10:40.949083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.034 [2024-07-11 06:10:40.949130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.955780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.956150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.956220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.962829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.963205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.963258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.970306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.970758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.970805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.977877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.978279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.978347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.985349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.985738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.985786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:40.992879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:40.993259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:40.993321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.000344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.000735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.000775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.007978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.008442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.015222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.015614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.015671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.022504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.022898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.022937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.029731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.030181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.030239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.037040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.037465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.037504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.044766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.045162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.045201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.052209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.052582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.052629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.059368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.059759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.059802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.066622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.067050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.067110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.073990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.074399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.074446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.081456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.081878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.081923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.088981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.089345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.089391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 
06:10:41.096257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.096636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.096729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.294 [2024-07-11 06:10:41.103602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.294 [2024-07-11 06:10:41.104020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.294 [2024-07-11 06:10:41.104073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.110890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.111273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.111321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.118445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.118847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.118891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.126196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.126617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.126663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.133941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.134352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.134398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.141754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.142144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.142196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.149065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.149510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.149569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.156783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.157235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.157288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.164532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.164920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.164964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.172157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.172533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.172581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.179577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.180013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.180058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.187388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.187798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.187848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.194957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.195347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.195395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.202259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.202683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.202743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.295 [2024-07-11 06:10:41.209626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.295 [2024-07-11 06:10:41.210073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.295 [2024-07-11 06:10:41.210132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.217156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.217534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.217611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.224631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.225072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.225110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.231936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.232351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.232398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.239535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.239955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.239999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.246969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.247358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.247396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.254321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.254745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.254791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.261687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.262102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.262156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.269169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.269561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.269623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.276881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.277259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.277306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.284066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.284499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.284537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.291515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.291950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.292004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.299104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.299531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.299570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.306619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.307079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.307129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.314193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.314570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.314627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.321721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.322137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.555 [2024-07-11 06:10:41.322188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.555 [2024-07-11 06:10:41.329307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.555 [2024-07-11 06:10:41.329708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.336763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.337225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.337273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.344414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.344818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.344855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.351720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 
[2024-07-11 06:10:41.352112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.352167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.359296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.359722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.359771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.366983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.367392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.367430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.374280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.374725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.374783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.381709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.382226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.382272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.389107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.389489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.389535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.396484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.396884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.396960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.403660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.404090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.404143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.411175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.411579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.411623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.418477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.419016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.425778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.426229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.426298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.433262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.433707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.440629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.441068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.441119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.448098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.448536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.448580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.556 
[2024-07-11 06:10:41.455520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.455950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.456001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.463024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.463421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.463459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.556 [2024-07-11 06:10:41.470277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.556 [2024-07-11 06:10:41.470705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.556 [2024-07-11 06:10:41.470763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.477802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.478175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.478237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.485113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.485523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.485562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.492164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.492560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.492607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.499808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.500245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.500324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.507303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.507730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.507780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.514497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.514943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.515011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.522192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.522587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.522625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.529437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.529857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.529903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.536629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.537137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.537195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.544148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.544569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 06:10:41.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.816 [2024-07-11 06:10:41.551834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.816 [2024-07-11 06:10:41.552221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.816 [2024-07-11 
06:10:41.552309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.559376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.559801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.559850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.566971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.567388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.567427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.574928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.575370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.575429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.582582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.583016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.583060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.590056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.590482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.597539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.597962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.598020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.605029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.605424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.605473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.612596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.613022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.613114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.620042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.620435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.620474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.627152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.627547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.627608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.634740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.635142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.635209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.642088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.642546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.642616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.649665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.650115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.650167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.657035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.657398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.657445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.664387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.664807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.664867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.671661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.672043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.672119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.679069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.679450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.679489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.686412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.686849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.686888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.693605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.694031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.694086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.701150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.701587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.701657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.708862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.709220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.709258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.716305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.716680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.716726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.723639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.724085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.724143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.817 [2024-07-11 06:10:41.730744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:25.817 [2024-07-11 06:10:41.731157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.817 [2024-07-11 06:10:41.731203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.738396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.738803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.738863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.745956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.746474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.746517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.753832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.754264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.754333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 
06:10:41.761460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.761864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.761902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.769002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.769421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.769459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.776372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.776776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.776838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.783776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.784204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.784258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.791176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.791557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.791606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.798746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.799124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.799163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.806125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.806521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.806560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.813624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.814046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.814110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.821159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.821535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.821587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.828870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.829258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.829297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.836174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.836599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.836672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.843562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.843996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.844034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.077 [2024-07-11 06:10:41.851079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.077 [2024-07-11 06:10:41.851480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.077 [2024-07-11 06:10:41.851541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.858557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.859035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.859086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.866024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.866416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.866466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.873676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.874120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.881444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.881889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.881955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.888923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.889310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.889349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.896320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.896700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.896780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.903638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.904038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.904082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.911294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.911723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.911794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.918966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.919345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.919391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.926233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.926636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.926716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.933557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.933991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.934082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.941083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.941461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.941500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.948264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.948677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.948715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.955687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.956099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.956152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.963120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.963506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.963545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.970577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.971094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.978084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.978475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.978523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.985618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.986117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.986168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.078 [2024-07-11 06:10:41.993080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.078 [2024-07-11 06:10:41.993496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.078 [2024-07-11 06:10:41.993544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.000516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.001002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.008023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.008481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.008520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.015441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.015856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.015912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.023049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.023489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.023528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.030419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.030850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.030890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.037617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.038048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.038093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.045293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.045693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.045747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.052803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.053213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.053268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.060376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.060784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.060824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.067736] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.068125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.068164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.075003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.075379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.075419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.082494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.082959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.083004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.090035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.090423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.090472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.097399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.097799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.097839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.104732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.105119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.105173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.111907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.112359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.119606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.120030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.120075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.127009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.127383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.127420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.134464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.134946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.134988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.141865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.142228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.148953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.149318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.149357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.156530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.156907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.156952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.164294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.164671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.164710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.171812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.172290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.338 [2024-07-11 06:10:42.172341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.338 [2024-07-11 06:10:42.179754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.338 [2024-07-11 06:10:42.180214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.180253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.187311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.187718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.187783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.194944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.195334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.195372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.202201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.202605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.202672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.209723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.210135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.210186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.217252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.217676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.217725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.224731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.225120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.225159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.232214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.232618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.232695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.239620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.240005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.240050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.246887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.247311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.247350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.339 [2024-07-11 06:10:42.254427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.339 [2024-07-11 06:10:42.254838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.339 [2024-07-11 06:10:42.254881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.598 [2024-07-11 06:10:42.261979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.262402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.262459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.269579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.269990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.270038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.276592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.276977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.277017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.283610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.284020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.284074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.290730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.291109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.291150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.297687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.298057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.298099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.304671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.305048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.305090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.312224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.312601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.312654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.319175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.319546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.319588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.326134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.326502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.326543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.333075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.333451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.333491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.340047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.340454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.340505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.347202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.347690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.354426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.354844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.354889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.361572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.362027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.362078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.368578] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.368999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.369053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.375521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.375903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.383109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.383507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.389968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.390345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.390386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.396998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.397368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.397409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.404022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.404402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.404443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.410963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.411333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.411372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.418091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.418468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.418510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.425244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.425617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.425671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.432347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.432772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.432813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.440006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.440450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.440489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.447013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.447412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.447453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.454625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.599 [2024-07-11 06:10:42.455028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.599 [2024-07-11 06:10:42.455069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.599 [2024-07-11 06:10:42.462455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.462894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.462938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.470266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.470670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.470719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.477775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.478186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.478230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.485150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.485575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.485614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.493039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.493432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.493503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.500366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.500743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.500812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.508340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.508762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.508817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.600 [2024-07-11 06:10:42.516150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.600 [2024-07-11 06:10:42.516531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:26.600 [2024-07-11 06:10:42.516571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.523382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.523766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.523807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.530783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.531243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.531282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.538327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.538707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.538776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.546120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.546534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.546589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.553384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.553790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.553830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.560975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.561459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.561498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.568469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.568850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.859 [2024-07-11 06:10:42.568901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.859 [2024-07-11 06:10:42.575408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.859 [2024-07-11 06:10:42.575786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.575853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.582627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.583018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.583064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.590239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.590651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.590702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.597564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.597981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.598033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.604913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.605326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.605368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.612377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.612785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.612825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.620008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.620412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.620452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.626793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.627177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.627264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.633726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.634095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.641619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.642044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.642096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.649525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.649970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.650013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.657333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.657748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.657813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.665135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.665554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.665609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.672708] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.673089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.673128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.680196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.680628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.687783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.688181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.688230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.695329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.695712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.695750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.702570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.702968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.703013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.710211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.710634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.710681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.717694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.718082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.718119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.725338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.725812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.725865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.732635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.733073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.733118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.740154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.740541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.740581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.747514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.747926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.747987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.754983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.755380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.755418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.762473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.762900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.762944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.770141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.770581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.770621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.860 [2024-07-11 06:10:42.777416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:24:26.860 [2024-07-11 06:10:42.777686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.860 [2024-07-11 06:10:42.777735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.118 00:24:27.118 Latency(us) 00:24:27.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.118 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:27.118 nvme0n1 : 2.00 4150.11 518.76 0.00 0.00 3844.39 3157.64 12809.31 00:24:27.118 =================================================================================================================== 00:24:27.118 Total : 4150.11 518.76 0.00 0.00 3844.39 3157.64 12809.31 00:24:27.118 0 00:24:27.118 06:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:27.118 06:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:27.118 06:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:27.118 | .driver_specific 00:24:27.118 | .nvme_error 00:24:27.118 | .status_code 00:24:27.118 | .command_transient_transport_error' 00:24:27.118 06:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 268 > 0 )) 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86677 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86677 ']' 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86677 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86677 00:24:27.376 killing process with pid 86677 00:24:27.376 Received shutdown signal, test time was about 2.000000 seconds 00:24:27.376 00:24:27.376 Latency(us) 00:24:27.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.376 =================================================================================================================== 00:24:27.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86677' 00:24:27.376 06:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86677 00:24:27.376 06:10:43 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86677 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86444 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 86444 ']' 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 86444 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86444 00:24:28.751 killing process with pid 86444 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86444' 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 86444 00:24:28.751 06:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 86444 00:24:29.711 00:24:29.711 real 0m22.873s 00:24:29.711 user 0m43.423s 00:24:29.711 sys 0m4.637s 00:24:29.711 ************************************ 00:24:29.711 END TEST nvmf_digest_error 00:24:29.711 ************************************ 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.711 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.969 rmmod nvme_tcp 00:24:29.969 rmmod nvme_fabrics 00:24:29.969 rmmod nvme_keyring 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:29.969 Process with pid 86444 is not found 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 86444 ']' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 86444 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 86444 ']' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 86444 00:24:29.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86444) - No such process 00:24:29.969 
06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 86444 is not found' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:29.969 00:24:29.969 real 0m48.469s 00:24:29.969 user 1m31.558s 00:24:29.969 sys 0m9.778s 00:24:29.969 ************************************ 00:24:29.969 END TEST nvmf_digest 00:24:29.969 ************************************ 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.969 06:10:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:29.969 06:10:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:29.969 06:10:45 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:29.969 06:10:45 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:24:29.969 06:10:45 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:29.969 06:10:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:29.969 06:10:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.969 06:10:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.969 ************************************ 00:24:29.969 START TEST nvmf_host_multipath 00:24:29.969 ************************************ 00:24:29.969 06:10:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:30.227 * Looking for test storage... 
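For reference, the pass/fail decision of the nvmf_digest_error run that wraps up above comes down to one RPC plus a jq filter: host/digest.sh (get_transient_errcount) asks the bdevperf instance for per-bdev I/O statistics and reads back how many completions carried the TRANSIENT TRANSPORT ERROR status that the injected data-digest failures are expected to produce (268 in this run). A minimal standalone sketch of that check, assuming the bdevperf RPC socket at /var/tmp/bperf.sock is still reachable; the rpc and errcount variable names are illustrative, not the harness's own:

    # Count NVMe completions with status COMMAND TRANSIENT TRANSPORT ERROR
    # as reported by bdevperf for nvme0n1, then require at least one.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))   # zero digest errors would mean the injection never took effect
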
00:24:30.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.227 06:10:45 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:30.228 06:10:45 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:30.228 Cannot find device "nvmf_tgt_br" 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.228 Cannot find device "nvmf_tgt_br2" 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:24:30.228 Cannot find device "nvmf_tgt_br" 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:24:30.228 06:10:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:30.228 Cannot find device "nvmf_tgt_br2" 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:30.228 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
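The nvmf_veth_init trace running through here (and continued just below with the remaining bridge ports, the iptables rules and the connectivity pings) builds the virtual test network for the multipath run: the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, and the host-side veth peers are tied together by the nvmf_br bridge. A condensed sketch of the equivalent iproute2 commands, using the same interface names and addresses as the trace (the for loop is only a condensation of the individual "ip link set ... up" steps):

    # One namespace for the target, three veth pairs, one bridge joining
    # the host-side peers; condensed from nvmf/common.sh nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) then confirm the topology actually carries traffic before the target application is started.
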
00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:30.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:24:30.487 00:24:30.487 --- 10.0.0.2 ping statistics --- 00:24:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.487 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:30.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:30.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:24:30.487 00:24:30.487 --- 10.0.0.3 ping statistics --- 00:24:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.487 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:30.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:30.487 00:24:30.487 --- 10.0.0.1 ping statistics --- 00:24:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.487 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=86963 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 86963 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 86963 ']' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.487 06:10:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:30.745 [2024-07-11 06:10:46.455241] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:24:30.745 [2024-07-11 06:10:46.455431] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.745 [2024-07-11 06:10:46.635068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.005 [2024-07-11 06:10:46.874606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.005 [2024-07-11 06:10:46.875028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.005 [2024-07-11 06:10:46.875062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.005 [2024-07-11 06:10:46.875077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.005 [2024-07-11 06:10:46.875089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
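For readability, the network bring-up and target launch traced above reduce to roughly the following commands (interface, namespace, address and binary names are copied from the log; this is a condensed editorial sketch of what the nvmf/common.sh helpers execute, not a drop-in replacement for them):

  # Enslave both target-side veth peers to the test bridge and open NVMe/TCP traffic.
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator side -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
  modprobe nvme-tcp
  # nvmf_tgt runs inside the namespace on two cores (-m 0x3) with every tracepoint
  # group enabled (-e 0xFFFF); nvmfappstart then waits for /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &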
00:24:31.005 [2024-07-11 06:10:46.875273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.005 [2024-07-11 06:10:46.875286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.264 [2024-07-11 06:10:47.079217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86963 00:24:31.522 06:10:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.781 [2024-07-11 06:10:47.672231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.781 06:10:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.348 Malloc0 00:24:32.348 06:10:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:32.606 06:10:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.866 06:10:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.125 [2024-07-11 06:10:48.848989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.125 06:10:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.384 [2024-07-11 06:10:49.093119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
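Condensed, the RPC sequence just traced builds a single ANA-reporting subsystem that is reachable through two listeners on the same address, which is what gives the host two paths to switch between. The commands are copied from the trace; rpc.py stands for the repo-local /home/vagrant/spdk_repo/spdk/scripts/rpc.py used throughout the run, and the flag glosses in the comments are editorial, based on the standard rpc.py option names, not part of the log:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # -a: allow any host, -s: serial number, -r: enable ANA reporting, -m 2: max namespaces.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same IP (ports 4420 and 4421); these are the two paths.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421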
00:24:33.384 06:10:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87019 00:24:33.384 06:10:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87019 /var/tmp/bdevperf.sock 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 87019 ']' 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.385 06:10:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:34.321 06:10:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.321 06:10:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:24:34.321 06:10:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:34.580 06:10:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:35.147 Nvme0n1 00:24:35.147 06:10:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:35.405 Nvme0n1 00:24:35.405 06:10:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:35.405 06:10:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.342 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:36.342 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.601 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:36.861 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:36.861 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87066 00:24:36.861 06:10:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:36.861 06:10:52 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.427 Attaching 4 probes... 00:24:43.427 @path[10.0.0.2, 4421]: 13006 00:24:43.427 @path[10.0.0.2, 4421]: 13210 00:24:43.427 @path[10.0.0.2, 4421]: 13343 00:24:43.427 @path[10.0.0.2, 4421]: 13257 00:24:43.427 @path[10.0.0.2, 4421]: 13386 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87066 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:43.427 06:10:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:43.427 06:10:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:43.686 06:10:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:43.686 06:10:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:43.686 06:10:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87177 00:24:43.686 06:10:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.269 Attaching 4 probes... 
00:24:50.269 @path[10.0.0.2, 4420]: 13073 00:24:50.269 @path[10.0.0.2, 4420]: 13278 00:24:50.269 @path[10.0.0.2, 4420]: 13425 00:24:50.269 @path[10.0.0.2, 4420]: 13478 00:24:50.269 @path[10.0.0.2, 4420]: 13512 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87177 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:50.269 06:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.269 06:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.527 06:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:50.527 06:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:50.527 06:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87291 00:24:50.527 06:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.091 Attaching 4 probes... 
00:24:57.091 @path[10.0.0.2, 4421]: 10218 00:24:57.091 @path[10.0.0.2, 4421]: 13043 00:24:57.091 @path[10.0.0.2, 4421]: 13066 00:24:57.091 @path[10.0.0.2, 4421]: 12992 00:24:57.091 @path[10.0.0.2, 4421]: 13046 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87291 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:57.091 06:11:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:57.350 06:11:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:57.350 06:11:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:57.350 06:11:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87404 00:24:57.350 06:11:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.913 Attaching 4 probes... 
00:25:03.913 00:25:03.913 00:25:03.913 00:25:03.913 00:25:03.913 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87404 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.913 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:04.181 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:04.181 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87511 00:25:04.181 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:04.181 06:11:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:10.786 06:11:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:10.786 06:11:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.786 Attaching 4 probes... 
00:25:10.786 @path[10.0.0.2, 4421]: 12552 00:25:10.786 @path[10.0.0.2, 4421]: 12671 00:25:10.786 @path[10.0.0.2, 4421]: 12690 00:25:10.786 @path[10.0.0.2, 4421]: 12751 00:25:10.786 @path[10.0.0.2, 4421]: 12805 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87511 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.786 06:11:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:11.723 06:11:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:11.724 06:11:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87635 00:25:11.724 06:11:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:11.724 06:11:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:18.286 Attaching 4 probes... 
00:25:18.286 @path[10.0.0.2, 4420]: 12751 00:25:18.286 @path[10.0.0.2, 4420]: 12848 00:25:18.286 @path[10.0.0.2, 4420]: 12836 00:25:18.286 @path[10.0.0.2, 4420]: 12877 00:25:18.286 @path[10.0.0.2, 4420]: 12786 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87635 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:18.286 06:11:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:18.286 [2024-07-11 06:11:34.096545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:18.286 06:11:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:18.545 06:11:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:25.143 06:11:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:25.143 06:11:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87805 00:25:25.143 06:11:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86963 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:25.143 06:11:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:31.703 Attaching 4 probes... 
00:25:31.703 @path[10.0.0.2, 4421]: 12722 00:25:31.703 @path[10.0.0.2, 4421]: 12928 00:25:31.703 @path[10.0.0.2, 4421]: 12956 00:25:31.703 @path[10.0.0.2, 4421]: 12837 00:25:31.703 @path[10.0.0.2, 4421]: 12900 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87805 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87019 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 87019 ']' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 87019 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87019 00:25:31.703 killing process with pid 87019 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87019' 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 87019 00:25:31.703 06:11:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 87019 00:25:31.703 Connection closed with partial response: 00:25:31.703 00:25:31.703 00:25:31.977 06:11:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87019 00:25:31.977 06:11:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:31.977 [2024-07-11 06:10:49.218889] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:31.977 [2024-07-11 06:10:49.219085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87019 ] 00:25:31.977 [2024-07-11 06:10:49.392445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.977 [2024-07-11 06:10:49.592615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.977 [2024-07-11 06:10:49.786660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:31.977 Running I/O for 90 seconds... 
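Before the per-command dump from try.txt continues below, it is worth summarizing the host-side pattern that produced it. Each confirm_io_on_port block traced above boils down to the following sketch. Binaries, arguments and the jq/awk filters are copied from the trace; rpc.py and bpftrace.sh stand for the full /home/vagrant/spdk_repo/spdk paths shown there, and the redirection of the bpftrace output to trace.txt is shown in simplified form (the helper script handles that in the real run):

  # One bdevperf instance owns both paths: the second attach_controller uses -x multipath.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

  # For each scenario: flip the ANA state of the two listeners, trace which path carries
  # I/O for ~6 seconds, and compare it with the listener the target reports in that state.
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6
  expected=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
             jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  actual=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ "$actual" == "$expected" ]]   # passes when bpftrace saw I/O on the expected port

The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in the dump that follows are commands completed with an ANA path error during the windows in which a listener had just been moved to the inaccessible state; the @path counters earlier in the log show the I/O continuing on the listener that remained reachable.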
00:25:31.977 [2024-07-11 06:10:59.479125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.479971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.479992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.480044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.480094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.480574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.480609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.480632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.977 [2024-07-11 06:10:59.481314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.977 [2024-07-11 06:10:59.481418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.977 [2024-07-11 06:10:59.481448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.977 [2024-07-11 06:10:59.481469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.481949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.481970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.482812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.482966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.482989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:25:31.978 [2024-07-11 06:10:59.483031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.978 [2024-07-11 06:10:59.483706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.483769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.978 [2024-07-11 06:10:59.483823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.978 [2024-07-11 06:10:59.483853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.979 [2024-07-11 06:10:59.483875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.979 [2024-07-11 06:10:59.483906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.979 [2024-07-11 06:10:59.483929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.979 [2024-07-11 06:10:59.483959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.979 [2024-07-11 06:10:59.483981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.979 [2024-07-11 06:10:59.484010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.979 [2024-07-11 06:10:59.484032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.979 [2024-07-11 06:10:59.484062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.979 [2024-07-11 06:10:59.484084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated nvme_qpair NOTICE pairs omitted: further READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET) commands on sqid:1, nsid:1, len:8, each completing on qid:1 with ASYMMETRIC ACCESS INACCESSIBLE (03/02); LBAs up to 50472 at 06:10:59, 7584-8600 at 06:11:06, and 63720-63752 at 06:11:13 ...]
00:25:31.983 [2024-07-11 06:11:13.140032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.983 [2024-07-11 06:11:13.140665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.140776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.140835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.140889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.140943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.140973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.141027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.141093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.141115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.141144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.141166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.141195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.141216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.983 [2024-07-11 06:11:13.141245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.983 [2024-07-11 06:11:13.141267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.141625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.141712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.141768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.984 [2024-07-11 06:11:13.141835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.141904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.141957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.141987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.142010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.142062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.142131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.142946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.142977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.143014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.143081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.984 [2024-07-11 06:11:13.143135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:25:31.984 [2024-07-11 06:11:13.143611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:31.984 [2024-07-11 06:11:13.143801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.984 [2024-07-11 06:11:13.143824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.143884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.143933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.143954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.143998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.144035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.144088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.985 [2024-07-11 06:11:13.144535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.144596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.144663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.985 [2024-07-11 06:11:13.144718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:31.985 [2024-07-11 06:11:13.144748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.144771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.144816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.144837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.144881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.144902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.144930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.144951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.144995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.986 [2024-07-11 06:11:13.145356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.986 [2024-07-11 06:11:13.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.145938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.145960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.146072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.146137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.146205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.146258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.986 [2024-07-11 06:11:13.146311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:31.986 [2024-07-11 06:11:13.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:13.146370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.146400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:13.146423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:13.147495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.147948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.147971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:25:31.987 [2024-07-11 06:11:13.148397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:13.148675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:13.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.544721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.544804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.544894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.544924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.544974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.544996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.987 [2024-07-11 06:11:26.545743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.545810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.545874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.545923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.545967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.546003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.546032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.546054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.546083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.546105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.546134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.987 [2024-07-11 06:11:26.546156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:31.987 [2024-07-11 06:11:26.546198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.546222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 
06:11:26.546799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.546967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.546988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.547777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.547958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.547978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.548043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.548067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.548088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.548119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.548140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.988 [2024-07-11 06:11:26.548159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.988 [2024-07-11 06:11:26.548179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.988 [2024-07-11 06:11:26.548199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.548941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.548962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.548996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 
06:11:26.549221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.989 [2024-07-11 06:11:26.549653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.549961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.549982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.550001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.550022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.550041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.550062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.550081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.550102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.550121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.989 [2024-07-11 06:11:26.550142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.989 [2024-07-11 06:11:26.550162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.990 [2024-07-11 06:11:26.550203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.990 [2024-07-11 06:11:26.550243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.990 [2024-07-11 06:11:26.550283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:25:31.990 [2024-07-11 06:11:26.550336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127832 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128288 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 
06:11:26.550515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128296 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128304 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128312 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128320 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128328 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128336 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.550925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.550942] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.550957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128344 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.550980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128352 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128360 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128368 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128376 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128384 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128392 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128400 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.990 [2024-07-11 06:11:26.551468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.990 [2024-07-11 06:11:26.551483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128408 len:8 PRP1 0x0 PRP2 0x0 00:25:31.990 [2024-07-11 06:11:26.551503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551783] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:25:31.990 [2024-07-11 06:11:26.551939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.990 [2024-07-11 06:11:26.551974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.551997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.990 [2024-07-11 06:11:26.552017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.990 [2024-07-11 06:11:26.552036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.991 [2024-07-11 06:11:26.552054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.991 [2024-07-11 06:11:26.552073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.991 [2024-07-11 06:11:26.552091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.991 [2024-07-11 06:11:26.552112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.991 [2024-07-11 06:11:26.552132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.991 [2024-07-11 06:11:26.552160] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set
00:25:31.991 [2024-07-11 06:11:26.553598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.991 [2024-07-11 06:11:26.553686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor
00:25:31.991 [2024-07-11 06:11:26.554191] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.991 [2024-07-11 06:11:26.554249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421
00:25:31.991 [2024-07-11 06:11:26.554273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set
00:25:31.991 [2024-07-11 06:11:26.554380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor
00:25:31.991 [2024-07-11 06:11:26.554433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.991 [2024-07-11 06:11:26.554458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.991 [2024-07-11 06:11:26.554502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.991 [2024-07-11 06:11:26.554559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.991 [2024-07-11 06:11:26.554587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.991 [2024-07-11 06:11:36.639951] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
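The long run of *NOTICE* lines above is the failover event itself: each READ/WRITE still outstanding on qpair 0x61500002b500 is printed and then completed with ABORTED - SQ DELETION (00/08) as the submission queue is torn down, the qpair is freed, and bdev_nvme resets the controller. The first reconnect attempt to 10.0.0.2 port 4421 is refused (connect() errno 111, i.e. ECONNREFUSED) and that reset fails; the retry started right afterwards succeeds about ten seconds later, at 06:11:36. A rough way to digest the abort spam offline, and to sanity-check the 4 KiB (IO size: 4096) throughput figure in the run summary that follows, is sketched below; it assumes this console output has been saved to a file named multipath.log (an illustrative name; the helper is not part of the test suite):

    # Tally aborted submissions and abort completions in the captured console log.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' multipath.log | wc -l
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  multipath.log | wc -l
    grep -o 'ABORTED - SQ DELETION' multipath.log | wc -l
    # With 4096-byte I/O, MiB/s should equal IOPS * 4096 / 1048576:
    awk 'BEGIN { printf "%.2f MiB/s\n", 5585.42 * 4096 / 1048576 }'    # prints 21.82 MiB/s

The awk line reproduces the 21.82 MiB/s reported alongside 5585.42 IOPS in the summary, so the two columns are consistent with the stated 4096-byte I/O size.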
00:25:31.991 Received shutdown signal, test time was about 55.477657 seconds
00:25:31.991
00:25:31.991 Latency(us)
00:25:31.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:31.991 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:31.991 Verification LBA range: start 0x0 length 0x4000
00:25:31.991 Nvme0n1 : 55.48 5585.42 21.82 0.00 0.00 22884.07 1690.53 7046430.72
00:25:31.991 ===================================================================================================================
00:25:31.991 Total : 5585.42 21.82 0.00 0.00 22884.07 1690.53 7046430.72
00:25:31.991 06:11:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:32.250 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:32.250 rmmod nvme_tcp
00:25:32.509 rmmod nvme_fabrics
00:25:32.509 rmmod nvme_keyring
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 86963 ']'
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 86963
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 86963 ']'
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 86963
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86963
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:32.509 killing process with pid 86963
06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86963'
06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 86963
00:25:32.509 06:11:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 86963
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:25:33.887 ************************************
00:25:33.887 END TEST nvmf_host_multipath
00:25:33.887 ************************************
00:25:33.887
00:25:33.887 real 1m3.724s
00:25:33.887 user 2m56.902s
00:25:33.887 sys 0m16.880s
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:33.887 06:11:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:25:33.887 06:11:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:33.887 06:11:49 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:25:33.887 06:11:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:33.887 06:11:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:33.887 06:11:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:33.887 ************************************
00:25:33.887 START TEST nvmf_timeout
00:25:33.887 ************************************
00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:25:33.887 * Looking for test storage...
00:25:33.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.887 
06:11:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.887 06:11:49 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.887 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:33.888 Cannot find device "nvmf_tgt_br" 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.888 Cannot find device "nvmf_tgt_br2" 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:33.888 Cannot find device "nvmf_tgt_br" 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:33.888 Cannot find device "nvmf_tgt_br2" 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:25:33.888 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:34.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:34.146 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:34.146 06:11:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:34.146 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:34.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:25:34.405 00:25:34.405 --- 10.0.0.2 ping statistics --- 00:25:34.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.405 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:25:34.405 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:34.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:34.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:34.405 00:25:34.405 --- 10.0.0.3 ping statistics --- 00:25:34.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.405 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:34.405 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:34.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:34.405 00:25:34.405 --- 10.0.0.1 ping statistics --- 00:25:34.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.405 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=88138 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 88138 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88138 ']' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.406 06:11:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.406 [2024-07-11 06:11:50.219259] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
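Condensed for reference, the nvmf_veth_init sequence traced above builds a small bridged topology: the initiator keeps nvmf_init_if (10.0.0.1/24) on the host, the two target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the three host-side veth peers are enslaved to the nvmf_br bridge, with iptables opened for NVMe/TCP port 4420. The sketch below only restates the ip/iptables commands already shown in nvmf/common.sh@166 through @207, reordered for readability; it adds nothing to the test itself.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends go into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> namespace reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host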
00:25:34.406 [2024-07-11 06:11:50.219439] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.665 [2024-07-11 06:11:50.390851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:34.665 [2024-07-11 06:11:50.551886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.665 [2024-07-11 06:11:50.551948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.665 [2024-07-11 06:11:50.551966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.665 [2024-07-11 06:11:50.551980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.665 [2024-07-11 06:11:50.551991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.665 [2024-07-11 06:11:50.552672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.665 [2024-07-11 06:11:50.552684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.924 [2024-07-11 06:11:50.715118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.491 06:11:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:35.750 [2024-07-11 06:11:51.435857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.750 06:11:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:36.010 Malloc0 00:25:36.010 06:11:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.269 06:11:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.269 06:11:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.529 [2024-07-11 06:11:52.368021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88187 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88187 /var/tmp/bdevperf.sock 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88187 ']' 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.529 06:11:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:36.788 [2024-07-11 06:11:52.491082] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:36.788 [2024-07-11 06:11:52.491262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88187 ] 00:25:36.788 [2024-07-11 06:11:52.663775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.046 [2024-07-11 06:11:52.874887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.305 [2024-07-11 06:11:53.034755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:37.565 06:11:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.565 06:11:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:37.565 06:11:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:37.843 06:11:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:38.105 NVMe0n1 00:25:38.105 06:11:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88211 00:25:38.105 06:11:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.105 06:11:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:38.105 Running I/O for 10 seconds... 
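At this point the timeout test has a complete target/initiator pair. Stripped of the xtrace noise and the waitforlisten polling, the bring-up traced above (nvmfappstart plus host/timeout.sh@25 through @53) amounts to the RPC sequence sketched below: nvmf_tgt runs on cores 0-1 inside the namespace, the TCP transport and a Malloc-backed subsystem listen on 10.0.0.2:4420, and bdevperf on core 2 attaches as an NVMe/TCP initiator with a 2 s reconnect delay and a 5 s controller-loss timeout, then runs a 10 s, queue-depth-128, 4 KiB verify workload. Paths are the ones shown in the log; error handling and the socket-wait loops are omitted from this sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (inside the namespace; RPC socket defaults to /var/tmp/spdk.sock)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf with its own RPC socket, then attach and start the run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &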
00:25:39.051 06:11:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.311 [2024-07-11 06:11:55.019064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 
[2024-07-11 06:11:55.019437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.311 [2024-07-11 06:11:55.019797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.311 [2024-07-11 06:11:55.019812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.019827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.019843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.019858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.019888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.019905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.019937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.019968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.020984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.020999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021300] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021600] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.312 [2024-07-11 06:11:55.021708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.312 [2024-07-11 06:11:55.021726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.021968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.021984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55576 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.022942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.022973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 
[2024-07-11 06:11:55.023085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.313 [2024-07-11 06:11:55.023928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.023960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.023976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.313 [2024-07-11 06:11:55.024325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.313 [2024-07-11 06:11:55.024341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.314 [2024-07-11 06:11:55.024565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.314 [2024-07-11 06:11:55.024610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.314 [2024-07-11 06:11:55.024880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 
[2024-07-11 06:11:55.024925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.024984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.024998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.025013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.025027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.025042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.025057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.025071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.314 [2024-07-11 06:11:55.025088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.025101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:25:39.314 [2024-07-11 06:11:55.025122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:39.314 [2024-07-11 06:11:55.025134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:39.314 [2024-07-11 06:11:55.025152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55008 len:8 PRP1 0x0 PRP2 0x0 00:25:39.314 [2024-07-11 06:11:55.025165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.314 [2024-07-11 06:11:55.025387] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 
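The wall of "ABORTED - SQ DELETION" completions above is the expected consequence of host/timeout.sh@55 removing the 10.0.0.2:4420 listener while the verify job is running: the target deletes the submission queue, each of the roughly 128 queued READ/WRITE commands is completed manually as aborted, the disconnected qpair (0x61500002b000) is freed, and bdev_nvme immediately begins its reset/reconnect loop against the now-unreachable listener. A rough sketch of reproducing and observing that window by hand, using the same RPCs the test uses; the polling loop is an illustration, not part of timeout.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Pull the listener out from under the initiator while I/O is still in flight
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Watch NVMe0 survive the retry window, then disappear once the loss timeout expires
for _ in $(seq 1 8); do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    sleep 1
done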
00:25:39.314 [2024-07-11 06:11:55.025638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:39.314 [2024-07-11 06:11:55.025772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:39.314 [2024-07-11 06:11:55.025912] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.314 [2024-07-11 06:11:55.025942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:25:39.314 [2024-07-11 06:11:55.025962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:39.314 [2024-07-11 06:11:55.025990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:39.314 [2024-07-11 06:11:55.026031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:39.314 [2024-07-11 06:11:55.026045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:39.314 [2024-07-11 06:11:55.026063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:39.314 [2024-07-11 06:11:55.026092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:39.314 [2024-07-11 06:11:55.026111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:39.314 06:11:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:25:41.216 [2024-07-11 06:11:57.026300] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.216 [2024-07-11 06:11:57.026901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:25:41.216 [2024-07-11 06:11:57.027370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:41.216 [2024-07-11 06:11:57.027793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:41.216 [2024-07-11 06:11:57.028271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.216 [2024-07-11 06:11:57.028743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.216 [2024-07-11 06:11:57.029159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.216 [2024-07-11 06:11:57.029424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.216 [2024-07-11 06:11:57.029675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.216 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:25:41.216 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.216 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:41.475 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:41.475 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:25:41.475 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:41.475 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:41.734 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:41.734 06:11:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:25:43.110 [2024-07-11 06:11:59.030289] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.110 [2024-07-11 06:11:59.030382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:25:43.110 [2024-07-11 06:11:59.030429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:43.110 [2024-07-11 06:11:59.030471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:43.110 [2024-07-11 06:11:59.030502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.110 [2024-07-11 06:11:59.030517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.110 [2024-07-11 06:11:59.030534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.110 [2024-07-11 06:11:59.030587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.110 [2024-07-11 06:11:59.030608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.660 [2024-07-11 06:12:01.030714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.660 [2024-07-11 06:12:01.030792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.660 [2024-07-11 06:12:01.030814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.660 [2024-07-11 06:12:01.030838] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:45.660 [2024-07-11 06:12:01.030892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
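The host/timeout.sh@57 and @58 checks traced above confirm that the controller and its namespace bdev stay registered with bdevperf even while every reconnect attempt is failing. Condensed into plain shell (same RPC socket and script path as in the trace; the variable names are illustrative, not from the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ctrlr=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]]   # both still present while the path is down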
00:25:46.226
00:25:46.226                                                                  Latency(us)
00:25:46.226 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:25:46.226 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:46.226    Verification LBA range: start 0x0 length 0x4000
00:25:46.226    NVMe0n1                  :       8.12     843.34       3.29      15.77     0.00   148702.34    4200.26 7015926.69
00:25:46.226 ===================================================================================================================
00:25:46.226 Total                       :             843.34       3.29      15.77     0.00   148702.34    4200.26 7015926.69
00:25:46.226 0
00:25:46.793 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:25:46.793 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:46.793 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:47.051 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:25:47.051 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:25:47.051 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:47.051 06:12:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 88211
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88187
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88187 ']'
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88187
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88187
00:25:47.310 killing process with pid 88187
Received shutdown signal, test time was about 9.263292 seconds
00:25:47.310
00:25:47.310                                                                  Latency(us)
Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:25:47.310 ===================================================================================================================
00:25:47.310 Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88187'
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88187
00:25:47.310 06:12:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88187
00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-11 06:12:04.651681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:48.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88336 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88336 /var/tmp/bdevperf.sock 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88336 ']' 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.943 06:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.943 [2024-07-11 06:12:04.766319] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:25:48.943 [2024-07-11 06:12:04.766775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88336 ] 00:25:49.202 [2024-07-11 06:12:04.933615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.460 [2024-07-11 06:12:05.138005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.460 [2024-07-11 06:12:05.338763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:50.027 06:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.027 06:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:50.027 06:12:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:50.285 06:12:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:25:50.543 NVMe0n1 00:25:50.543 06:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88360 00:25:50.543 06:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.543 06:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:25:50.543 Running I/O for 10 seconds... 
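The second bdevperf pass above attaches the controller with three reconnect knobs. Roughly: --reconnect-delay-sec is the pause between reconnect attempts, --fast-io-fail-timeout-sec is how long after a disconnect queued I/O keeps waiting before being failed back to the application, and --ctrlr-loss-timeout-sec is how long reconnects keep being retried before the controller is given up for good. The same command as in the trace, reflowed for readability:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1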
00:25:51.478 06:12:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.739 [2024-07-11 06:12:07.604562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.739 [2024-07-11 06:12:07.604715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.604743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.739 [2024-07-11 06:12:07.604761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.604777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.739 [2024-07-11 06:12:07.604793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.604809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.739 [2024-07-11 06:12:07.604828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.604842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:51.739 [2024-07-11 06:12:07.605395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.739 [2024-07-11 06:12:07.605437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 
06:12:07.605617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.605982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.605997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.739 [2024-07-11 06:12:07.606438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.739 [2024-07-11 06:12:07.606457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49712 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.606976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.606991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 
06:12:07.607081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.607972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.607987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.608005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.608020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.608038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.608053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.608071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.740 [2024-07-11 06:12:07.608086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.740 [2024-07-11 06:12:07.608104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:51.741 [2024-07-11 06:12:07.608259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.608611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.608630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.610315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.610695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.611172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.611690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.612191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.612575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.613144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.613509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.614071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.614524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.615011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.615383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.615950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.616422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.616945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.617310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.617885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:99 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.617919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.617944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.617960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.617979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.617993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.741 [2024-07-11 06:12:07.618419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.741 [2024-07-11 06:12:07.618453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.741 [2024-07-11 06:12:07.618486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.741 [2024-07-11 06:12:07.618519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.741 [2024-07-11 06:12:07.618552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.741 [2024-07-11 06:12:07.618571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 
[2024-07-11 06:12:07.618622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.742 [2024-07-11 06:12:07.618947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.742 [2024-07-11 06:12:07.618980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.618997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:25:51.742 [2024-07-11 06:12:07.619019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.742 [2024-07-11 06:12:07.619034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.742 [2024-07-11 06:12:07.619048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50328 len:8 PRP1 0x0 PRP2 0x0 00:25:51.742 [2024-07-11 06:12:07.619065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.742 [2024-07-11 06:12:07.619321] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:25:51.742 [2024-07-11 06:12:07.619408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:51.742 [2024-07-11 06:12:07.619681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.742 [2024-07-11 06:12:07.619816] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.742 [2024-07-11 06:12:07.619856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:25:51.742 [2024-07-11 06:12:07.619876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:51.742 [2024-07-11 06:12:07.619909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:51.742 [2024-07-11 06:12:07.619936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.742 [2024-07-11 06:12:07.619956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.742 [2024-07-11 06:12:07.619974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.742 [2024-07-11 06:12:07.620009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.742 [2024-07-11 06:12:07.620028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.742 06:12:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:25:53.118 [2024-07-11 06:12:08.620234] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.118 [2024-07-11 06:12:08.620695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:25:53.118 [2024-07-11 06:12:08.621156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:25:53.118 [2024-07-11 06:12:08.621613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:53.118 [2024-07-11 06:12:08.622118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.118 [2024-07-11 06:12:08.622547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.118 [2024-07-11 06:12:08.622977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.118 [2024-07-11 06:12:08.623251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.118 [2024-07-11 06:12:08.623488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.118 06:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.118 [2024-07-11 06:12:08.880512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.118 06:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 88360 00:25:53.728 [2024-07-11 06:12:09.636337] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:01.845 00:26:01.845 Latency(us) 00:26:01.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.845 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:01.845 Verification LBA range: start 0x0 length 0x4000 00:26:01.845 NVMe0n1 : 10.01 4663.55 18.22 0.00 0.00 27392.30 1891.61 3035150.89 00:26:01.845 =================================================================================================================== 00:26:01.845 Total : 4663.55 18.22 0.00 0.00 27392.30 1891.61 3035150.89 00:26:01.845 0 00:26:01.845 06:12:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88467 00:26:01.845 06:12:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.845 06:12:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:01.845 Running I/O for 10 seconds... 
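Before the per-command dump that follows, it helps to know what host/timeout.sh is doing around it: with the I/O run just started (rpc_pid 88467) in flight, the script removes the target's TCP listener (host/timeout.sh@99, next trace line) so outstanding commands start completing as ABORTED - SQ DELETION, then re-adds it later (host/timeout.sh@102) so the controller reset can finally succeed. Condensed from the rpc.py calls visible in this trace (paths as used on this CI host), the listener toggle is just:

    # Remove the listener so in-flight I/O on nqn.2016-06.io.spdk:cnode1 starts timing out.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Restore it once the timeout behaviour has been exercised, letting the reset complete.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420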
00:26:01.845 06:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.845 [2024-07-11 06:12:17.716920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.716985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 
[2024-07-11 06:12:17.717737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.717872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.717886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.718908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.718924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.719595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.719923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.720097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.720398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.720430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.720461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.720625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.720970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.721002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.721023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.721038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.721055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.721069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.721085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.721099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.721114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.845 [2024-07-11 06:12:17.721493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.845 [2024-07-11 06:12:17.721531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.721548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.721564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.721579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.721604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.721629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.721898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.721925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.722454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.722488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.722511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.722526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.722542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.722556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.722572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.722586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.722994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 
06:12:17.723114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.723620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.723633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.724689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.724719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.724963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.724992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.725012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.725146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.725169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.725285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.725315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.725559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.725587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.725602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.725618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.725984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:61 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.846 [2024-07-11 06:12:17.726072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.726771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.726791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47512 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.727967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.727984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.728278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.728304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.728319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.728335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.728367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:01.846 [2024-07-11 06:12:17.728381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.846 [2024-07-11 06:12:17.728631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.846 [2024-07-11 06:12:17.728887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.728922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.728954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.729099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.729127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.729397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.729416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.729444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.729760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.729785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.729800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.729816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.730984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.730998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.731301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.731336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.731495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.731655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.731691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.731984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.732981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.733982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.734262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.847 [2024-07-11 06:12:17.734292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.734309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:26:01.847 [2024-07-11 06:12:17.734329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:01.847 [2024-07-11 06:12:17.734342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:01.847 [2024-07-11 06:12:17.734490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47888 len:8 PRP1 0x0 PRP2 0x0 00:26:01.847 [2024-07-11 06:12:17.734586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.735036] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
00:26:01.847 [2024-07-11 06:12:17.735368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.847 [2024-07-11 06:12:17.735411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.735430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.847 [2024-07-11 06:12:17.735444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.735459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.847 [2024-07-11 06:12:17.735614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.735739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.847 [2024-07-11 06:12:17.735756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.847 [2024-07-11 06:12:17.735770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:01.847 [2024-07-11 06:12:17.736285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.847 [2024-07-11 06:12:17.736351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:01.847 [2024-07-11 06:12:17.736710] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.847 [2024-07-11 06:12:17.736760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:01.847 [2024-07-11 06:12:17.736780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:01.847 [2024-07-11 06:12:17.736933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:01.847 [2024-07-11 06:12:17.737069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.847 [2024-07-11 06:12:17.737089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.847 [2024-07-11 06:12:17.737104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.847 [2024-07-11 06:12:17.737365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.847 [2024-07-11 06:12:17.737404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.847 06:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:03.223 [2024-07-11 06:12:18.737590] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.223 [2024-07-11 06:12:18.737707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:03.223 [2024-07-11 06:12:18.737734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:03.223 [2024-07-11 06:12:18.737769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:03.223 [2024-07-11 06:12:18.737799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.223 [2024-07-11 06:12:18.737815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.223 [2024-07-11 06:12:18.737831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.223 [2024-07-11 06:12:18.737868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.223 [2024-07-11 06:12:18.737885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:04.158 [2024-07-11 06:12:19.738119] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.158 [2024-07-11 06:12:19.738214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:04.158 [2024-07-11 06:12:19.738241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:04.158 [2024-07-11 06:12:19.738278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:04.158 [2024-07-11 06:12:19.738308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.158 [2024-07-11 06:12:19.738325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.158 [2024-07-11 06:12:19.738340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.159 [2024-07-11 06:12:19.738399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.159 [2024-07-11 06:12:19.738420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.094 [2024-07-11 06:12:20.741599] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.094 [2024-07-11 06:12:20.741702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:05.094 [2024-07-11 06:12:20.741730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:05.094 [2024-07-11 06:12:20.742271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:05.094 [2024-07-11 06:12:20.742759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.094 [2024-07-11 06:12:20.742802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.094 [2024-07-11 06:12:20.742821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.095 06:12:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.095 [2024-07-11 06:12:20.747464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.095 [2024-07-11 06:12:20.747529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.095 [2024-07-11 06:12:21.006956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.353 06:12:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 88467 00:26:05.920 [2024-07-11 06:12:21.797260] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:11.182 00:26:11.182 Latency(us) 00:26:11.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.182 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:11.182 Verification LBA range: start 0x0 length 0x4000 00:26:11.182 NVMe0n1 : 10.01 3908.75 15.27 3243.36 0.00 17853.69 808.03 3035150.89 00:26:11.182 =================================================================================================================== 00:26:11.182 Total : 3908.75 15.27 3243.36 0.00 17853.69 0.00 3035150.89 00:26:11.182 0 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88336 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88336 ']' 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88336 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88336 00:26:11.182 killing process with pid 88336 00:26:11.182 Received shutdown signal, test time was about 10.000000 seconds 00:26:11.182 00:26:11.182 Latency(us) 00:26:11.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.182 =================================================================================================================== 00:26:11.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88336' 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88336 00:26:11.182 06:12:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88336 00:26:12.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88585 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88585 /var/tmp/bdevperf.sock 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 88585 ']' 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.117 06:12:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.117 [2024-07-11 06:12:27.844662] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
00:26:12.117 [2024-07-11 06:12:27.844859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88585 ]
00:26:12.117 [2024-07-11 06:12:28.018486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:12.375 [2024-07-11 06:12:28.246725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:12.633 [2024-07-11 06:12:28.440227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:26:12.891 06:12:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:12.891 06:12:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:26:12.891 06:12:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88601
00:26:12.891 06:12:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:26:12.891 06:12:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:26:13.149 06:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:26:13.429 NVMe0n1
00:26:13.429 06:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88638
00:26:13.429 06:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:13.429 06:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:26:13.691 Running I/O for 10 seconds...
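Taken together, the trace above is the whole bdevperf-side setup for this controller-loss run: start bdevperf idle (-z) on its own RPC socket, attach the remote namespace over TCP with a 5 s controller-loss timeout and a 2 s reconnect delay, then trigger the queued workload with bdevperf.py. A condensed sketch of that sequence using the exact commands and option values shown in the log; waitforlisten is assumed to be the harness helper from autotest_common.sh, and the bpftrace.sh step is omitted:

  # Condensed from the host/timeout.sh trace above; paths, addresses and option
  # values are the ones that appear in this log.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # -z: start idle and wait for RPCs; the workload only runs once perform_tests is sent.
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid $sock        # harness helper: wait for the RPC socket to appear

  # Options copied verbatim from host/timeout.sh@118 above.
  $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9

  # Attach NVMe0 with a 5 s controller-loss timeout and a 2 s reconnect delay (host/timeout.sh@120).
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued 10-second randread run and wait for bdevperf to exit.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
  wait $bdevperf_pid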
00:26:14.625 06:12:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.887 [2024-07-11 06:12:30.599107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.887 [2024-07-11 06:12:30.599196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.887 [2024-07-11 06:12:30.599222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.887 [2024-07-11 06:12:30.599262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.887 [2024-07-11 06:12:30.599274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.887 [2024-07-11 06:12:30.599288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.887 [2024-07-11 06:12:30.599329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.887 [2024-07-11 06:12:30.599341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.887 [2024-07-11 06:12:30.599347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.887 [2024-07-11 06:12:30.599354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set
00:26:14.887 [2024-07-11 06:12:30.599377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:26:14.887 [2024-07-11 06:12:30.599380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set
[2024-07-11 06:12:30.599412 through 06:12:30.600795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set (this identical message is repeated for every log entry in this interval)
00:26:14.888 [2024-07-11 06:12:30.600808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.600932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:26:14.888 [2024-07-11 06:12:30.601363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.888 [2024-07-11 06:12:30.601714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.888 [2024-07-11 06:12:30.601730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.601974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.601989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49144 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:14.889 [2024-07-11 06:12:30.602793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.602969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.602990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.889 [2024-07-11 06:12:30.603321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.889 [2024-07-11 06:12:30.603337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.603981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.603996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:14.890 [2024-07-11 06:12:30.604643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.890 [2024-07-11 06:12:30.604885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.890 [2024-07-11 06:12:30.604901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.604921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.604936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.604970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.604989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605822] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.605970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.605989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.606003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.606025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.606040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.606060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.891 [2024-07-11 06:12:30.606075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.891 [2024-07-11 06:12:30.606095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.892 [2024-07-11 06:12:30.606111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.892 [2024-07-11 06:12:30.606132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.892 [2024-07-11 06:12:30.606148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.892 [2024-07-11 06:12:30.606167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.892 [2024-07-11 06:12:30.606183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.892 [2024-07-11 06:12:30.606202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.892 [2024-07-11 06:12:30.606217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.892 [2024-07-11 06:12:30.606235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:26:14.892 [2024-07-11 06:12:30.606255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.892 [2024-07-11 06:12:30.606273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.892 [2024-07-11 06:12:30.606288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3440 len:8 PRP1 0x0 PRP2 0x0 00:26:14.892 [2024-07-11 06:12:30.606306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.892 [2024-07-11 06:12:30.606561] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:26:14.892 [2024-07-11 06:12:30.606899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:14.892 [2024-07-11 06:12:30.606944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:14.892 [2024-07-11 06:12:30.607103] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.892 [2024-07-11 06:12:30.607148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:14.892 [2024-07-11 06:12:30.607169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:14.892 [2024-07-11 06:12:30.607204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:14.892 [2024-07-11 06:12:30.607230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:14.892 [2024-07-11 06:12:30.607247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:14.892 [2024-07-11 06:12:30.607266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:14.892 [2024-07-11 06:12:30.607301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
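The connect() failures above return errno 111 (ECONNREFUSED): nothing is accepting on 10.0.0.2 port 4420 any more, so each reset attempt fails and the bdev_nvme layer schedules another reconnect. As a minimal sketch only (not the exact command this run used; the reconnect flag names follow recent SPDK rpc.py releases and are an assumption here), a controller can be attached with an explicit reconnect policy like this:

  # Hedged sketch: retry every 2 s, never give the controller up (-1 = no ctrlr-loss timeout).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec -1

With a two-second reconnect delay the retry timestamps land roughly two seconds apart, which matches the reconnect records that follow.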
00:26:14.892 [2024-07-11 06:12:30.607318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:14.892 06:12:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 88638 00:26:16.793 [2024-07-11 06:12:32.607683] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-07-11 06:12:32.607763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:16.793 [2024-07-11 06:12:32.607789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:16.793 [2024-07-11 06:12:32.607828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:16.793 [2024-07-11 06:12:32.607858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.793 [2024-07-11 06:12:32.607876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.793 [2024-07-11 06:12:32.607892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.793 [2024-07-11 06:12:32.607936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.793 [2024-07-11 06:12:32.607956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.694 [2024-07-11 06:12:34.608241] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.694 [2024-07-11 06:12:34.608322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:26:18.694 [2024-07-11 06:12:34.608346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:26:18.694 [2024-07-11 06:12:34.608388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:18.694 [2024-07-11 06:12:34.608419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.694 [2024-07-11 06:12:34.608436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.694 [2024-07-11 06:12:34.608454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.694 [2024-07-11 06:12:34.608501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.694 [2024-07-11 06:12:34.608519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.227 [2024-07-11 06:12:36.608635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
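Each retry in the two-second cadence above ends the same way: connect() refused, controller left in error state, reset marked failed. A quick hypothetical check (not part of the test scripts) that confirms from the target side why the host keeps getting refused:

  # Hedged sketch: list TCP listeners inside the target's network namespace;
  # an empty result explains the ECONNREFUSED (111) seen by the host.
  ip netns exec nvmf_tgt_ns_spdk ss -ltn | grep ':4420' || echo 'no listener on 4420'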
00:26:21.227 [2024-07-11 06:12:36.608710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.227 [2024-07-11 06:12:36.608737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.227 [2024-07-11 06:12:36.608754] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:21.227 [2024-07-11 06:12:36.608801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.793 00:26:21.793 Latency(us) 00:26:21.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.793 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:21.793 NVMe0n1 : 8.16 1524.46 5.95 15.68 0.00 83192.13 10724.07 7046430.72 00:26:21.793 =================================================================================================================== 00:26:21.793 Total : 1524.46 5.95 15.68 0.00 83192.13 10724.07 7046430.72 00:26:21.793 0 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:21.793 Attaching 5 probes... 00:26:21.793 1328.690704: reset bdev controller NVMe0 00:26:21.793 1328.805255: reconnect bdev controller NVMe0 00:26:21.793 3329.248428: reconnect delay bdev controller NVMe0 00:26:21.793 3329.287315: reconnect bdev controller NVMe0 00:26:21.793 5329.831824: reconnect delay bdev controller NVMe0 00:26:21.793 5329.871694: reconnect bdev controller NVMe0 00:26:21.793 7330.383612: reconnect delay bdev controller NVMe0 00:26:21.793 7330.409575: reconnect bdev controller NVMe0 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 88601 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88585 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88585 ']' 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88585 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:21.793 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88585 00:26:21.793 killing process with pid 88585 00:26:21.793 Received shutdown signal, test time was about 8.227823 seconds 00:26:21.793 00:26:21.794 Latency(us) 00:26:21.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.794 =================================================================================================================== 00:26:21.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.794 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:21.794 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:21.794 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88585' 00:26:21.794 06:12:37 nvmf_tcp.nvmf_timeout 
-- common/autotest_common.sh@967 -- # kill 88585 00:26:21.794 06:12:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88585 00:26:23.170 06:12:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.429 rmmod nvme_tcp 00:26:23.429 rmmod nvme_fabrics 00:26:23.429 rmmod nvme_keyring 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 88138 ']' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 88138 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 88138 ']' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 88138 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88138 00:26:23.429 killing process with pid 88138 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88138' 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 88138 00:26:23.429 06:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 88138 00:26:24.806 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.806 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:24.807 00:26:24.807 real 0m51.116s 00:26:24.807 user 2m28.250s 
00:26:24.807 sys 0m5.663s 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:24.807 ************************************ 00:26:24.807 END TEST nvmf_timeout 00:26:24.807 ************************************ 00:26:24.807 06:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 06:12:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:25.065 06:12:40 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:26:25.065 06:12:40 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:25.065 06:12:40 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.065 06:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 06:12:40 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:25.065 00:26:25.065 real 16m13.429s 00:26:25.065 user 42m30.343s 00:26:25.065 sys 3m59.993s 00:26:25.065 06:12:40 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.065 06:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 ************************************ 00:26:25.065 END TEST nvmf_tcp 00:26:25.065 ************************************ 00:26:25.065 06:12:40 -- common/autotest_common.sh@1142 -- # return 0 00:26:25.065 06:12:40 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:26:25.065 06:12:40 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:25.065 06:12:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:25.065 06:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.065 06:12:40 -- common/autotest_common.sh@10 -- # set +x 00:26:25.065 ************************************ 00:26:25.065 START TEST nvmf_dif 00:26:25.065 ************************************ 00:26:25.065 06:12:40 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:25.065 * Looking for test storage... 
00:26:25.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:25.065 06:12:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.065 06:12:40 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.065 06:12:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.066 06:12:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.066 06:12:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.066 06:12:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.066 06:12:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.066 06:12:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.066 06:12:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:25.066 06:12:40 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.066 06:12:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:25.066 06:12:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:25.066 06:12:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:25.066 06:12:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:25.066 06:12:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.066 06:12:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:25.066 06:12:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:25.066 06:12:40 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:25.066 06:12:40 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:25.324 06:12:40 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:25.324 Cannot find device "nvmf_tgt_br" 00:26:25.324 06:12:40 nvmf_dif -- nvmf/common.sh@155 -- # true 00:26:25.324 06:12:40 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:25.324 Cannot find device "nvmf_tgt_br2" 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@156 -- # true 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:25.324 Cannot find device "nvmf_tgt_br" 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@158 -- # true 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:25.324 Cannot find device "nvmf_tgt_br2" 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@159 -- # true 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:25.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:25.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:25.324 06:12:41 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:25.325 
06:12:41 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:25.325 06:12:41 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:25.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:26:25.614 00:26:25.614 --- 10.0.0.2 ping statistics --- 00:26:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.614 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:25.614 06:12:41 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:25.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:25.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:26:25.615 00:26:25.615 --- 10.0.0.3 ping statistics --- 00:26:25.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.615 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:25.615 06:12:41 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:25.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:25.615 00:26:25.615 --- 10.0.0.1 ping statistics --- 00:26:25.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.615 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:25.615 06:12:41 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.615 06:12:41 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:26:25.615 06:12:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:25.615 06:12:41 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:25.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.873 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:25.873 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:25.873 06:12:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:25.873 06:12:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:25.873 06:12:41 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.873 06:12:41 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.873 06:12:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:25.874 06:12:41 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=89095 00:26:25.874 
06:12:41 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 89095 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 89095 ']' 00:26:25.874 06:12:41 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.874 06:12:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:26.132 [2024-07-11 06:12:41.833029] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:26:26.132 [2024-07-11 06:12:41.833251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.132 [2024-07-11 06:12:42.012254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.391 [2024-07-11 06:12:42.245281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.391 [2024-07-11 06:12:42.245363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.391 [2024-07-11 06:12:42.245384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.391 [2024-07-11 06:12:42.245401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.391 [2024-07-11 06:12:42.245414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
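The waitforlisten step above simply blocks until the freshly started nvmf_tgt answers on its RPC socket before any configuration is sent. A rough, hedged equivalent of that helper (a sketch, not the autotest implementation) is a poll loop such as:

  # Hedged sketch: poll the UNIX-domain RPC socket until the target responds.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Once the loop exits, rpc.py calls such as the nvmf_create_transport with --dif-insert-or-strip below can proceed safely.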
00:26:26.391 [2024-07-11 06:12:42.245460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.650 [2024-07-11 06:12:42.426415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:26.909 06:12:42 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:26.909 06:12:42 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.909 06:12:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:26.909 06:12:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:26.909 [2024-07-11 06:12:42.820988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.909 06:12:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.909 06:12:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 ************************************ 00:26:27.168 START TEST fio_dif_1_default 00:26:27.168 ************************************ 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 bdev_null0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 06:12:42 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:27.168 [2024-07-11 06:12:42.865117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:27.168 { 00:26:27.168 "params": { 00:26:27.168 "name": "Nvme$subsystem", 00:26:27.168 "trtype": "$TEST_TRANSPORT", 00:26:27.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.168 "adrfam": "ipv4", 00:26:27.168 "trsvcid": "$NVMF_PORT", 00:26:27.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.168 "hdgst": ${hdgst:-false}, 00:26:27.168 "ddgst": ${ddgst:-false} 00:26:27.168 }, 00:26:27.168 "method": "bdev_nvme_attach_controller" 00:26:27.168 } 00:26:27.168 EOF 00:26:27.168 )") 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:27.168 "params": { 00:26:27.168 "name": "Nvme0", 00:26:27.168 "trtype": "tcp", 00:26:27.168 "traddr": "10.0.0.2", 00:26:27.168 "adrfam": "ipv4", 00:26:27.168 "trsvcid": "4420", 00:26:27.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.168 "hdgst": false, 00:26:27.168 "ddgst": false 00:26:27.168 }, 00:26:27.168 "method": "bdev_nvme_attach_controller" 00:26:27.168 }' 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:27.168 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:27.169 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:26:27.169 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:27.169 06:12:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.427 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:27.427 fio-3.35 00:26:27.428 Starting 1 thread 00:26:39.636 00:26:39.636 filename0: (groupid=0, jobs=1): err= 0: pid=89154: Thu Jul 11 06:12:53 2024 00:26:39.636 read: IOPS=6589, BW=25.7MiB/s (27.0MB/s)(257MiB/10001msec) 00:26:39.636 slat (usec): min=5, max=118, avg=11.84, stdev= 6.11 00:26:39.636 clat (usec): min=408, max=1587, avg=571.07, stdev=77.59 00:26:39.636 lat (usec): min=415, max=1600, avg=582.91, stdev=79.17 00:26:39.636 clat percentiles (usec): 00:26:39.636 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 469], 20.00th=[ 494], 00:26:39.636 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:26:39.636 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 693], 00:26:39.636 | 99.00th=[ 750], 99.50th=[ 791], 99.90th=[ 906], 99.95th=[ 971], 00:26:39.636 | 99.99th=[ 1106] 00:26:39.636 bw ( KiB/s): min=23968, max=30784, per=100.00%, avg=26472.42, stdev=2281.02, samples=19 00:26:39.636 iops : min= 5992, max= 7696, avg=6618.11, stdev=570.25, samples=19 00:26:39.636 lat (usec) : 500=22.05%, 750=76.91%, 1000=1.00% 00:26:39.636 lat (msec) : 2=0.03% 00:26:39.636 cpu : usr=86.04%, sys=12.00%, ctx=19, majf=0, minf=1075 00:26:39.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.636 issued rwts: total=65900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:39.636 00:26:39.636 Run 
status group 0 (all jobs): 00:26:39.636 READ: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=257MiB (270MB), run=10001-10001msec 00:26:39.636 ----------------------------------------------------- 00:26:39.636 Suppressions used: 00:26:39.636 count bytes template 00:26:39.636 1 8 /usr/src/fio/parse.c 00:26:39.636 1 8 libtcmalloc_minimal.so 00:26:39.636 1 904 libcrypto.so 00:26:39.636 ----------------------------------------------------- 00:26:39.636 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 00:26:39.636 real 0m12.281s 00:26:39.636 user 0m10.480s 00:26:39.636 sys 0m1.530s 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 ************************************ 00:26:39.636 END TEST fio_dif_1_default 00:26:39.636 ************************************ 00:26:39.636 06:12:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:39.636 06:12:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:39.636 06:12:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.636 06:12:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 ************************************ 00:26:39.636 START TEST fio_dif_1_multi_subsystems 00:26:39.636 ************************************ 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:39.636 06:12:55 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 bdev_null0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 [2024-07-11 06:12:55.199452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 bdev_null1 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:39.636 
06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.636 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.637 { 00:26:39.637 "params": { 00:26:39.637 "name": "Nvme$subsystem", 00:26:39.637 "trtype": "$TEST_TRANSPORT", 00:26:39.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.637 "adrfam": "ipv4", 00:26:39.637 "trsvcid": "$NVMF_PORT", 00:26:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.637 "hdgst": ${hdgst:-false}, 00:26:39.637 "ddgst": ${ddgst:-false} 00:26:39.637 }, 00:26:39.637 "method": "bdev_nvme_attach_controller" 00:26:39.637 } 00:26:39.637 EOF 00:26:39.637 )") 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:39.637 { 00:26:39.637 "params": { 00:26:39.637 "name": "Nvme$subsystem", 00:26:39.637 "trtype": "$TEST_TRANSPORT", 00:26:39.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.637 "adrfam": "ipv4", 00:26:39.637 "trsvcid": "$NVMF_PORT", 00:26:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.637 "hdgst": ${hdgst:-false}, 00:26:39.637 "ddgst": ${ddgst:-false} 00:26:39.637 }, 00:26:39.637 "method": "bdev_nvme_attach_controller" 00:26:39.637 } 00:26:39.637 EOF 00:26:39.637 )") 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
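The subsystem plumbing exercised above reduces to four JSON-RPC calls per test subsystem. A minimal sketch with scripts/rpc.py — assuming the TCP transport was already created earlier in the run, and that the rpc_cmd helper used by target/dif.sh forwards to the same JSON-RPC interface — would be:

  # create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # expose it through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The test repeats the same sequence for cnode1/bdev_null1 and then points a single fio run at both namespaces at once.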
00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.637 "params": { 00:26:39.637 "name": "Nvme0", 00:26:39.637 "trtype": "tcp", 00:26:39.637 "traddr": "10.0.0.2", 00:26:39.637 "adrfam": "ipv4", 00:26:39.637 "trsvcid": "4420", 00:26:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:39.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:39.637 "hdgst": false, 00:26:39.637 "ddgst": false 00:26:39.637 }, 00:26:39.637 "method": "bdev_nvme_attach_controller" 00:26:39.637 },{ 00:26:39.637 "params": { 00:26:39.637 "name": "Nvme1", 00:26:39.637 "trtype": "tcp", 00:26:39.637 "traddr": "10.0.0.2", 00:26:39.637 "adrfam": "ipv4", 00:26:39.637 "trsvcid": "4420", 00:26:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.637 "hdgst": false, 00:26:39.637 "ddgst": false 00:26:39.637 }, 00:26:39.637 "method": "bdev_nvme_attach_controller" 00:26:39.637 }' 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:39.637 06:12:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.637 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:39.637 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:39.637 fio-3.35 00:26:39.637 Starting 2 threads 00:26:51.867 00:26:51.867 filename0: (groupid=0, jobs=1): err= 0: pid=89322: Thu Jul 11 06:13:06 2024 00:26:51.867 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(139MiB/10001msec) 00:26:51.867 slat (nsec): min=5469, max=75671, avg=18012.25, stdev=6691.98 00:26:51.867 clat (usec): min=861, max=3433, avg=1076.54, stdev=74.20 00:26:51.867 lat (usec): min=896, max=3455, avg=1094.55, stdev=75.25 00:26:51.867 clat percentiles (usec): 00:26:51.867 | 1.00th=[ 938], 5.00th=[ 971], 10.00th=[ 988], 20.00th=[ 1012], 00:26:51.867 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:26:51.867 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:26:51.867 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1483], 00:26:51.867 | 99.99th=[ 3425] 00:26:51.867 bw ( KiB/s): min=13792, max=14432, per=50.01%, avg=14197.89, stdev=150.87, samples=19 00:26:51.867 iops : min= 3448, max= 3608, avg=3549.47, stdev=37.72, samples=19 00:26:51.867 lat (usec) : 1000=13.74% 00:26:51.867 lat (msec) : 2=86.25%, 4=0.01% 00:26:51.867 cpu : usr=90.73%, sys=7.72%, ctx=19, majf=0, minf=1075 00:26:51.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.867 issued rwts: total=35488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.867 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:26:51.867 filename1: (groupid=0, jobs=1): err= 0: pid=89323: Thu Jul 11 06:13:06 2024 00:26:51.867 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(139MiB/10001msec) 00:26:51.867 slat (nsec): min=8345, max=64223, avg=17619.77, stdev=6332.37 00:26:51.867 clat (usec): min=830, max=3950, avg=1078.91, stdev=80.24 00:26:51.867 lat (usec): min=839, max=3993, avg=1096.53, stdev=81.28 00:26:51.867 clat percentiles (usec): 00:26:51.867 | 1.00th=[ 922], 5.00th=[ 963], 10.00th=[ 988], 20.00th=[ 1012], 00:26:51.867 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:26:51.867 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:26:51.867 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1483], 00:26:51.867 | 99.99th=[ 3916] 00:26:51.867 bw ( KiB/s): min=13764, max=14432, per=50.01%, avg=14196.42, stdev=154.40, samples=19 00:26:51.867 iops : min= 3441, max= 3608, avg=3549.11, stdev=38.60, samples=19 00:26:51.867 lat (usec) : 1000=14.19% 00:26:51.867 lat (msec) : 2=85.80%, 4=0.01% 00:26:51.867 cpu : usr=90.46%, sys=8.09%, ctx=19, majf=0, minf=1075 00:26:51.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.867 issued rwts: total=35488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.867 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:51.867 00:26:51.867 Run status group 0 (all jobs): 00:26:51.867 READ: bw=27.7MiB/s (29.1MB/s), 13.9MiB/s-13.9MiB/s (14.5MB/s-14.5MB/s), io=277MiB (291MB), run=10001-10001msec 00:26:51.867 ----------------------------------------------------- 00:26:51.867 Suppressions used: 00:26:51.867 count bytes template 00:26:51.867 2 16 /usr/src/fio/parse.c 00:26:51.867 1 8 libtcmalloc_minimal.so 00:26:51.867 1 904 libcrypto.so 00:26:51.867 ----------------------------------------------------- 00:26:51.867 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:51.867 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in 
"$@" 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:51.868 ************************************ 00:26:51.868 END TEST fio_dif_1_multi_subsystems 00:26:51.868 ************************************ 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.868 00:26:51.868 real 0m12.585s 00:26:51.868 user 0m20.208s 00:26:51.868 sys 0m1.991s 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.868 06:13:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 06:13:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:52.127 06:13:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:52.127 06:13:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:52.127 06:13:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.127 06:13:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 ************************************ 00:26:52.127 START TEST fio_dif_rand_params 00:26:52.127 ************************************ 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 bdev_null0 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:52.127 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.128 [2024-07-11 06:13:07.840443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.128 { 00:26:52.128 "params": { 00:26:52.128 "name": "Nvme$subsystem", 00:26:52.128 "trtype": "$TEST_TRANSPORT", 00:26:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.128 "adrfam": "ipv4", 00:26:52.128 "trsvcid": "$NVMF_PORT", 00:26:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.128 "hdgst": ${hdgst:-false}, 00:26:52.128 "ddgst": ${ddgst:-false} 00:26:52.128 }, 00:26:52.128 "method": "bdev_nvme_attach_controller" 00:26:52.128 } 00:26:52.128 EOF 00:26:52.128 )") 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.128 "params": { 00:26:52.128 "name": "Nvme0", 00:26:52.128 "trtype": "tcp", 00:26:52.128 "traddr": "10.0.0.2", 00:26:52.128 "adrfam": "ipv4", 00:26:52.128 "trsvcid": "4420", 00:26:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:52.128 "hdgst": false, 00:26:52.128 "ddgst": false 00:26:52.128 }, 00:26:52.128 "method": "bdev_nvme_attach_controller" 00:26:52.128 }' 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:52.128 06:13:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.387 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:52.387 ... 
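The fio invocation just logged drives SPDK's external spdk_bdev ioengine rather than a kernel block device: the bdev_nvme_attach_controller parameters printed above are fed in as a JSON configuration on /dev/fd/62, the generated job file arrives on /dev/fd/61, and the plugin (plus, in this ASAN build, libasan) is injected via LD_PRELOAD. A rough equivalent using ordinary files — file names here are illustrative, not taken from the run, and $SPDK_DIR stands for the SPDK checkout (/home/vagrant/spdk_repo/spdk in this run) — looks like:

  # bdev.json holds the printed bdev_nvme_attach_controller config; randread.fio is the job file
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio

The filenameN streams in the output that follows correspond to the bdev(s) named in that generated job file.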
00:26:52.387 fio-3.35 00:26:52.387 Starting 3 threads 00:26:58.953 00:26:58.953 filename0: (groupid=0, jobs=1): err= 0: pid=89479: Thu Jul 11 06:13:13 2024 00:26:58.953 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(123MiB/5007msec) 00:26:58.954 slat (nsec): min=5870, max=65262, avg=16229.44, stdev=8740.03 00:26:58.954 clat (usec): min=13605, max=19932, avg=15175.72, stdev=393.35 00:26:58.954 lat (usec): min=13615, max=19978, avg=15191.95, stdev=394.03 00:26:58.954 clat percentiles (usec): 00:26:58.954 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14877], 20.00th=[14877], 00:26:58.954 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15139], 60.00th=[15270], 00:26:58.954 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15664], 00:26:58.954 | 99.00th=[15926], 99.50th=[16057], 99.90th=[20055], 99.95th=[20055], 00:26:58.954 | 99.99th=[20055] 00:26:58.954 bw ( KiB/s): min=24576, max=25344, per=33.29%, avg=25195.30, stdev=313.70, samples=10 00:26:58.954 iops : min= 192, max= 198, avg=196.80, stdev= 2.53, samples=10 00:26:58.954 lat (msec) : 20=100.00% 00:26:58.954 cpu : usr=91.23%, sys=8.03%, ctx=14, majf=0, minf=1074 00:26:58.954 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:58.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.954 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:58.954 filename0: (groupid=0, jobs=1): err= 0: pid=89480: Thu Jul 11 06:13:13 2024 00:26:58.954 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(123MiB/5008msec) 00:26:58.954 slat (nsec): min=4205, max=54600, avg=16482.75, stdev=8755.28 00:26:58.954 clat (usec): min=14330, max=19099, avg=15177.97, stdev=367.32 00:26:58.954 lat (usec): min=14340, max=19122, avg=15194.45, stdev=367.57 00:26:58.954 clat percentiles (usec): 00:26:58.954 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14877], 20.00th=[14877], 00:26:58.954 | 30.00th=[15008], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:26:58.954 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15795], 00:26:58.954 | 99.00th=[15926], 99.50th=[16057], 99.90th=[19006], 99.95th=[19006], 00:26:58.954 | 99.99th=[19006] 00:26:58.954 bw ( KiB/s): min=24576, max=25344, per=33.28%, avg=25190.40, stdev=323.82, samples=10 00:26:58.954 iops : min= 192, max= 198, avg=196.80, stdev= 2.53, samples=10 00:26:58.954 lat (msec) : 20=100.00% 00:26:58.954 cpu : usr=91.17%, sys=8.05%, ctx=19, majf=0, minf=1075 00:26:58.954 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:58.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.954 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:58.954 filename0: (groupid=0, jobs=1): err= 0: pid=89481: Thu Jul 11 06:13:13 2024 00:26:58.954 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(123MiB/5006msec) 00:26:58.954 slat (nsec): min=5582, max=53818, avg=15159.72, stdev=7403.91 00:26:58.954 clat (usec): min=11599, max=20663, avg=15174.55, stdev=458.63 00:26:58.954 lat (usec): min=11609, max=20689, avg=15189.71, stdev=458.84 00:26:58.954 clat percentiles (usec): 00:26:58.954 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14877], 20.00th=[14877], 00:26:58.954 | 30.00th=[15008], 40.00th=[15008], 
50.00th=[15139], 60.00th=[15270], 00:26:58.954 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15664], 00:26:58.954 | 99.00th=[15926], 99.50th=[16057], 99.90th=[20579], 99.95th=[20579], 00:26:58.954 | 99.99th=[20579] 00:26:58.954 bw ( KiB/s): min=24576, max=25344, per=33.37%, avg=25258.67, stdev=256.00, samples=9 00:26:58.954 iops : min= 192, max= 198, avg=197.33, stdev= 2.00, samples=9 00:26:58.954 lat (msec) : 20=99.70%, 50=0.30% 00:26:58.954 cpu : usr=92.61%, sys=6.71%, ctx=10, majf=0, minf=1073 00:26:58.954 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:58.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.954 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.954 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:58.954 00:26:58.954 Run status group 0 (all jobs): 00:26:58.954 READ: bw=73.9MiB/s (77.5MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=370MiB (388MB), run=5006-5008msec 00:26:59.213 ----------------------------------------------------- 00:26:59.213 Suppressions used: 00:26:59.213 count bytes template 00:26:59.213 5 44 /usr/src/fio/parse.c 00:26:59.213 1 8 libtcmalloc_minimal.so 00:26:59.213 1 904 libcrypto.so 00:26:59.213 ----------------------------------------------------- 00:26:59.213 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:59.213 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.473 06:13:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 bdev_null0 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.473 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 [2024-07-11 06:13:15.164173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 bdev_null1 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:59.474 06:13:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 bdev_null2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.474 06:13:15 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.474 { 00:26:59.474 "params": { 00:26:59.474 "name": "Nvme$subsystem", 00:26:59.474 "trtype": "$TEST_TRANSPORT", 00:26:59.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.474 "adrfam": "ipv4", 00:26:59.474 "trsvcid": "$NVMF_PORT", 00:26:59.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.474 "hdgst": ${hdgst:-false}, 00:26:59.474 "ddgst": ${ddgst:-false} 00:26:59.474 }, 00:26:59.474 "method": "bdev_nvme_attach_controller" 00:26:59.474 } 00:26:59.474 EOF 00:26:59.474 )") 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.474 { 00:26:59.474 "params": { 00:26:59.474 "name": "Nvme$subsystem", 00:26:59.474 "trtype": "$TEST_TRANSPORT", 00:26:59.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.474 "adrfam": "ipv4", 00:26:59.474 "trsvcid": "$NVMF_PORT", 00:26:59.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.474 "hdgst": ${hdgst:-false}, 00:26:59.474 "ddgst": ${ddgst:-false} 00:26:59.474 }, 00:26:59.474 "method": "bdev_nvme_attach_controller" 00:26:59.474 } 00:26:59.474 EOF 00:26:59.474 )") 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:59.474 06:13:15 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.474 { 00:26:59.474 "params": { 00:26:59.474 "name": "Nvme$subsystem", 00:26:59.474 "trtype": "$TEST_TRANSPORT", 00:26:59.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.474 "adrfam": "ipv4", 00:26:59.474 "trsvcid": "$NVMF_PORT", 00:26:59.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.474 "hdgst": ${hdgst:-false}, 00:26:59.474 "ddgst": ${ddgst:-false} 00:26:59.474 }, 00:26:59.474 "method": "bdev_nvme_attach_controller" 00:26:59.474 } 00:26:59.474 EOF 00:26:59.474 )") 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:59.474 06:13:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.474 "params": { 00:26:59.474 "name": "Nvme0", 00:26:59.474 "trtype": "tcp", 00:26:59.474 "traddr": "10.0.0.2", 00:26:59.475 "adrfam": "ipv4", 00:26:59.475 "trsvcid": "4420", 00:26:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.475 "hdgst": false, 00:26:59.475 "ddgst": false 00:26:59.475 }, 00:26:59.475 "method": "bdev_nvme_attach_controller" 00:26:59.475 },{ 00:26:59.475 "params": { 00:26:59.475 "name": "Nvme1", 00:26:59.475 "trtype": "tcp", 00:26:59.475 "traddr": "10.0.0.2", 00:26:59.475 "adrfam": "ipv4", 00:26:59.475 "trsvcid": "4420", 00:26:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.475 "hdgst": false, 00:26:59.475 "ddgst": false 00:26:59.475 }, 00:26:59.475 "method": "bdev_nvme_attach_controller" 00:26:59.475 },{ 00:26:59.475 "params": { 00:26:59.475 "name": "Nvme2", 00:26:59.475 "trtype": "tcp", 00:26:59.475 "traddr": "10.0.0.2", 00:26:59.475 "adrfam": "ipv4", 00:26:59.475 "trsvcid": "4420", 00:26:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.475 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.475 "hdgst": false, 00:26:59.475 "ddgst": false 00:26:59.475 }, 00:26:59.475 "method": "bdev_nvme_attach_controller" 00:26:59.475 }' 00:26:59.475 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:59.475 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:59.475 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:26:59.475 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:59.475 06:13:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.735 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:59.735 ... 00:26:59.735 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:59.735 ... 00:26:59.735 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:59.735 ... 00:26:59.735 fio-3.35 00:26:59.735 Starting 24 threads 00:27:11.933 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89579: Thu Jul 11 06:13:26 2024 00:27:11.933 read: IOPS=194, BW=779KiB/s (798kB/s)(7828KiB/10046msec) 00:27:11.933 slat (usec): min=4, max=8039, avg=25.72, stdev=222.26 00:27:11.933 clat (msec): min=29, max=192, avg=81.92, stdev=22.25 00:27:11.933 lat (msec): min=29, max=192, avg=81.94, stdev=22.25 00:27:11.933 clat percentiles (msec): 00:27:11.933 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 63], 00:27:11.933 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 88], 00:27:11.933 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 125], 00:27:11.933 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:27:11.933 | 99.99th=[ 192] 00:27:11.933 bw ( KiB/s): min= 576, max= 920, per=4.32%, avg=778.45, stdev=90.41, samples=20 00:27:11.933 iops : min= 144, max= 230, avg=194.60, stdev=22.59, samples=20 00:27:11.933 lat (msec) : 50=3.88%, 100=81.66%, 250=14.46% 00:27:11.933 cpu : usr=39.68%, sys=2.33%, ctx=1360, majf=0, minf=1075 00:27:11.933 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=87.8%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89580: Thu Jul 11 06:13:26 2024 00:27:11.933 read: IOPS=193, BW=775KiB/s (793kB/s)(7808KiB/10078msec) 00:27:11.933 slat (usec): min=6, max=8035, avg=25.21, stdev=211.61 00:27:11.933 clat (msec): min=11, max=151, avg=82.32, stdev=22.12 00:27:11.933 lat (msec): min=11, max=151, avg=82.34, stdev=22.12 00:27:11.933 clat percentiles (msec): 00:27:11.933 | 1.00th=[ 14], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 64], 00:27:11.933 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 89], 00:27:11.933 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 121], 00:27:11.933 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 153], 00:27:11.933 | 99.99th=[ 153] 00:27:11.933 bw ( KiB/s): min= 656, max= 1248, per=4.30%, avg=774.40, stdev=125.71, samples=20 00:27:11.933 iops : min= 164, max= 312, avg=193.60, stdev=31.43, samples=20 00:27:11.933 lat (msec) : 20=3.07%, 50=2.00%, 100=79.97%, 250=14.96% 00:27:11.933 cpu : usr=39.95%, sys=2.37%, ctx=1108, majf=0, minf=1074 00:27:11.933 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89581: Thu Jul 11 06:13:26 2024 00:27:11.933 read: 
IOPS=183, BW=734KiB/s (751kB/s)(7364KiB/10036msec) 00:27:11.933 slat (usec): min=4, max=8039, avg=24.16, stdev=205.98 00:27:11.933 clat (msec): min=46, max=187, avg=86.96, stdev=20.90 00:27:11.933 lat (msec): min=46, max=187, avg=86.99, stdev=20.89 00:27:11.933 clat percentiles (msec): 00:27:11.933 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 67], 00:27:11.933 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 91], 00:27:11.933 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 122], 00:27:11.933 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 188], 00:27:11.933 | 99.99th=[ 188] 00:27:11.933 bw ( KiB/s): min= 624, max= 872, per=4.06%, avg=731.05, stdev=82.34, samples=20 00:27:11.933 iops : min= 156, max= 218, avg=182.75, stdev=20.57, samples=20 00:27:11.933 lat (msec) : 50=0.81%, 100=78.06%, 250=21.13% 00:27:11.933 cpu : usr=41.54%, sys=2.57%, ctx=1483, majf=0, minf=1075 00:27:11.933 IO depths : 1=0.1%, 2=2.6%, 4=10.4%, 8=72.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=89.8%, 8=7.9%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=1841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89582: Thu Jul 11 06:13:26 2024 00:27:11.933 read: IOPS=191, BW=765KiB/s (783kB/s)(7704KiB/10071msec) 00:27:11.933 slat (usec): min=5, max=8039, avg=20.45, stdev=182.91 00:27:11.933 clat (msec): min=34, max=145, avg=83.34, stdev=19.69 00:27:11.933 lat (msec): min=34, max=145, avg=83.36, stdev=19.70 00:27:11.933 clat percentiles (msec): 00:27:11.933 | 1.00th=[ 37], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 62], 00:27:11.933 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 88], 00:27:11.933 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:27:11.933 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:11.933 | 99.99th=[ 146] 00:27:11.933 bw ( KiB/s): min= 640, max= 840, per=4.25%, avg=766.50, stdev=56.56, samples=20 00:27:11.933 iops : min= 160, max= 210, avg=191.60, stdev=14.20, samples=20 00:27:11.933 lat (msec) : 50=4.98%, 100=82.55%, 250=12.46% 00:27:11.933 cpu : usr=31.29%, sys=1.91%, ctx=916, majf=0, minf=1075 00:27:11.933 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=1926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89583: Thu Jul 11 06:13:26 2024 00:27:11.933 read: IOPS=189, BW=756KiB/s (775kB/s)(7628KiB/10085msec) 00:27:11.933 slat (usec): min=6, max=10033, avg=33.41, stdev=355.74 00:27:11.933 clat (msec): min=9, max=170, avg=84.25, stdev=23.16 00:27:11.933 lat (msec): min=9, max=170, avg=84.28, stdev=23.16 00:27:11.933 clat percentiles (msec): 00:27:11.933 | 1.00th=[ 11], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 66], 00:27:11.933 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 91], 00:27:11.933 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 123], 00:27:11.933 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 171], 00:27:11.933 | 99.99th=[ 171] 00:27:11.933 bw ( KiB/s): min= 640, max= 1154, per=4.20%, avg=756.15, 
stdev=112.62, samples=20 00:27:11.933 iops : min= 160, max= 288, avg=189.00, stdev=28.06, samples=20 00:27:11.933 lat (msec) : 10=0.63%, 20=1.52%, 50=1.99%, 100=78.03%, 250=17.83% 00:27:11.933 cpu : usr=41.26%, sys=2.38%, ctx=1396, majf=0, minf=1074 00:27:11.933 IO depths : 1=0.2%, 2=2.5%, 4=9.6%, 8=73.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=89.8%, 8=8.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.933 filename0: (groupid=0, jobs=1): err= 0: pid=89584: Thu Jul 11 06:13:26 2024 00:27:11.933 read: IOPS=213, BW=854KiB/s (875kB/s)(8620KiB/10090msec) 00:27:11.933 slat (nsec): min=5282, max=55660, avg=15205.73, stdev=6241.19 00:27:11.933 clat (usec): min=895, max=167210, avg=74614.85, stdev=44103.18 00:27:11.933 lat (usec): min=908, max=167229, avg=74630.06, stdev=44103.66 00:27:11.933 clat percentiles (usec): 00:27:11.933 | 1.00th=[ 1975], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 4948], 00:27:11.933 | 30.00th=[ 71828], 40.00th=[ 83362], 50.00th=[ 87557], 60.00th=[ 93848], 00:27:11.933 | 70.00th=[ 95945], 80.00th=[107480], 90.00th=[120062], 95.00th=[131597], 00:27:11.933 | 99.00th=[156238], 99.50th=[156238], 99.90th=[158335], 99.95th=[166724], 00:27:11.933 | 99.99th=[166724] 00:27:11.933 bw ( KiB/s): min= 512, max= 4864, per=4.75%, avg=855.60, stdev=946.58, samples=20 00:27:11.933 iops : min= 128, max= 1216, avg=213.90, stdev=236.64, samples=20 00:27:11.933 lat (usec) : 1000=0.09% 00:27:11.933 lat (msec) : 2=1.44%, 4=16.47%, 10=4.18%, 20=2.23%, 50=1.39% 00:27:11.933 lat (msec) : 100=47.52%, 250=26.68% 00:27:11.933 cpu : usr=38.36%, sys=2.43%, ctx=958, majf=0, minf=1073 00:27:11.933 IO depths : 1=0.9%, 2=6.3%, 4=21.8%, 8=58.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:11.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 complete : 0=0.0%, 4=93.5%, 8=1.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.933 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.933 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename0: (groupid=0, jobs=1): err= 0: pid=89585: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=175, BW=703KiB/s (719kB/s)(7060KiB/10048msec) 00:27:11.934 slat (usec): min=4, max=12043, avg=33.95, stdev=385.27 00:27:11.934 clat (msec): min=49, max=154, avg=90.70, stdev=17.86 00:27:11.934 lat (msec): min=49, max=154, avg=90.74, stdev=17.85 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 80], 00:27:11.934 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 94], 00:27:11.934 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 114], 95.00th=[ 121], 00:27:11.934 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:27:11.934 | 99.99th=[ 155] 00:27:11.934 bw ( KiB/s): min= 528, max= 824, per=3.89%, avg=700.85, stdev=79.02, samples=20 00:27:11.934 iops : min= 132, max= 206, avg=175.20, stdev=19.74, samples=20 00:27:11.934 lat (msec) : 50=0.17%, 100=81.25%, 250=18.58% 00:27:11.934 cpu : usr=39.55%, sys=2.00%, ctx=1101, majf=0, minf=1075 00:27:11.934 IO depths : 1=0.1%, 2=4.4%, 4=17.4%, 8=64.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=92.0%, 8=4.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:11.934 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename0: (groupid=0, jobs=1): err= 0: pid=89586: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=189, BW=758KiB/s (776kB/s)(7624KiB/10060msec) 00:27:11.934 slat (usec): min=4, max=8046, avg=29.65, stdev=313.13 00:27:11.934 clat (msec): min=31, max=156, avg=84.21, stdev=20.99 00:27:11.934 lat (msec): min=31, max=156, avg=84.24, stdev=20.99 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 64], 00:27:11.934 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 89], 00:27:11.934 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 122], 00:27:11.934 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:27:11.934 | 99.99th=[ 157] 00:27:11.934 bw ( KiB/s): min= 592, max= 840, per=4.20%, avg=756.10, stdev=67.93, samples=20 00:27:11.934 iops : min= 148, max= 210, avg=189.00, stdev=16.99, samples=20 00:27:11.934 lat (msec) : 50=2.52%, 100=81.11%, 250=16.37% 00:27:11.934 cpu : usr=39.93%, sys=2.41%, ctx=1469, majf=0, minf=1073 00:27:11.934 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89587: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=167, BW=672KiB/s (688kB/s)(6768KiB/10078msec) 00:27:11.934 slat (usec): min=6, max=4033, avg=24.28, stdev=169.09 00:27:11.934 clat (msec): min=55, max=168, avg=94.87, stdev=17.59 00:27:11.934 lat (msec): min=55, max=168, avg=94.90, stdev=17.59 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 61], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 83], 00:27:11.934 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 95], 00:27:11.934 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 128], 00:27:11.934 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:27:11.934 | 99.99th=[ 169] 00:27:11.934 bw ( KiB/s): min= 552, max= 768, per=3.73%, avg=672.60, stdev=58.10, samples=20 00:27:11.934 iops : min= 138, max= 192, avg=168.15, stdev=14.53, samples=20 00:27:11.934 lat (msec) : 100=72.87%, 250=27.13% 00:27:11.934 cpu : usr=42.23%, sys=2.09%, ctx=1387, majf=0, minf=1073 00:27:11.934 IO depths : 1=0.1%, 2=5.2%, 4=20.7%, 8=60.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=93.2%, 8=2.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89588: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=180, BW=724KiB/s (741kB/s)(7252KiB/10021msec) 00:27:11.934 slat (usec): min=5, max=4031, avg=20.91, stdev=122.09 00:27:11.934 clat (msec): min=32, max=170, avg=88.30, stdev=20.51 00:27:11.934 lat (msec): min=32, max=170, avg=88.32, stdev=20.51 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 69], 00:27:11.934 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 92], 00:27:11.934 
| 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 126], 00:27:11.934 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 171], 99.95th=[ 171], 00:27:11.934 | 99.99th=[ 171] 00:27:11.934 bw ( KiB/s): min= 608, max= 872, per=3.99%, avg=718.85, stdev=81.13, samples=20 00:27:11.934 iops : min= 152, max= 218, avg=179.70, stdev=20.27, samples=20 00:27:11.934 lat (msec) : 50=1.60%, 100=76.72%, 250=21.68% 00:27:11.934 cpu : usr=43.04%, sys=2.35%, ctx=1396, majf=0, minf=1074 00:27:11.934 IO depths : 1=0.1%, 2=3.1%, 4=12.5%, 8=70.2%, 16=14.1%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=90.4%, 8=6.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89589: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=190, BW=761KiB/s (779kB/s)(7664KiB/10069msec) 00:27:11.934 slat (usec): min=5, max=8030, avg=22.21, stdev=183.14 00:27:11.934 clat (msec): min=37, max=155, avg=83.86, stdev=20.61 00:27:11.934 lat (msec): min=37, max=155, avg=83.88, stdev=20.61 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 62], 00:27:11.934 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:27:11.934 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:27:11.934 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:11.934 | 99.99th=[ 157] 00:27:11.934 bw ( KiB/s): min= 640, max= 872, per=4.21%, avg=759.20, stdev=77.85, samples=20 00:27:11.934 iops : min= 160, max= 218, avg=189.75, stdev=19.48, samples=20 00:27:11.934 lat (msec) : 50=3.91%, 100=81.00%, 250=15.08% 00:27:11.934 cpu : usr=31.26%, sys=1.95%, ctx=946, majf=0, minf=1075 00:27:11.934 IO depths : 1=0.1%, 2=1.6%, 4=6.0%, 8=77.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89590: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=197, BW=789KiB/s (808kB/s)(7932KiB/10054msec) 00:27:11.934 slat (usec): min=5, max=8036, avg=24.37, stdev=254.69 00:27:11.934 clat (msec): min=27, max=152, avg=80.83, stdev=19.74 00:27:11.934 lat (msec): min=27, max=152, avg=80.85, stdev=19.74 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 61], 00:27:11.934 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 85], 00:27:11.934 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 111], 00:27:11.934 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:27:11.934 | 99.99th=[ 153] 00:27:11.934 bw ( KiB/s): min= 616, max= 872, per=4.37%, avg=788.35, stdev=57.20, samples=20 00:27:11.934 iops : min= 154, max= 218, avg=197.05, stdev=14.31, samples=20 00:27:11.934 lat (msec) : 50=6.45%, 100=82.75%, 250=10.79% 00:27:11.934 cpu : usr=31.70%, sys=1.59%, ctx=895, majf=0, minf=1075 00:27:11.934 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=87.2%, 8=12.4%, 
16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89591: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=184, BW=738KiB/s (756kB/s)(7472KiB/10125msec) 00:27:11.934 slat (usec): min=5, max=8045, avg=30.00, stdev=321.27 00:27:11.934 clat (msec): min=9, max=166, avg=86.41, stdev=24.66 00:27:11.934 lat (msec): min=9, max=166, avg=86.44, stdev=24.66 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 11], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:27:11.934 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 93], 00:27:11.934 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 117], 95.00th=[ 130], 00:27:11.934 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:27:11.934 | 99.99th=[ 167] 00:27:11.934 bw ( KiB/s): min= 616, max= 1264, per=4.11%, avg=741.50, stdev=138.01, samples=20 00:27:11.934 iops : min= 154, max= 316, avg=185.35, stdev=34.52, samples=20 00:27:11.934 lat (msec) : 10=0.86%, 20=2.46%, 50=2.03%, 100=74.95%, 250=19.70% 00:27:11.934 cpu : usr=31.38%, sys=1.66%, ctx=925, majf=0, minf=1073 00:27:11.934 IO depths : 1=0.2%, 2=2.2%, 4=8.5%, 8=73.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=89.9%, 8=8.2%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89592: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=198, BW=793KiB/s (812kB/s)(7952KiB/10022msec) 00:27:11.934 slat (usec): min=4, max=8034, avg=25.87, stdev=254.27 00:27:11.934 clat (msec): min=23, max=173, avg=80.53, stdev=22.29 00:27:11.934 lat (msec): min=23, max=173, avg=80.55, stdev=22.30 00:27:11.934 clat percentiles (msec): 00:27:11.934 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:27:11.934 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 85], 00:27:11.934 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 121], 00:27:11.934 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 174], 00:27:11.934 | 99.99th=[ 174] 00:27:11.934 bw ( KiB/s): min= 584, max= 888, per=4.37%, avg=788.85, stdev=84.15, samples=20 00:27:11.934 iops : min= 146, max= 222, avg=197.20, stdev=21.04, samples=20 00:27:11.934 lat (msec) : 50=7.14%, 100=80.33%, 250=12.53% 00:27:11.934 cpu : usr=31.31%, sys=1.90%, ctx=998, majf=0, minf=1075 00:27:11.934 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:11.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.934 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.934 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.934 filename1: (groupid=0, jobs=1): err= 0: pid=89593: Thu Jul 11 06:13:26 2024 00:27:11.934 read: IOPS=196, BW=784KiB/s (803kB/s)(7884KiB/10053msec) 00:27:11.934 slat (usec): min=4, max=8037, avg=38.62, stdev=382.74 00:27:11.934 clat (msec): min=24, max=179, avg=81.31, stdev=21.98 00:27:11.935 lat (msec): min=24, max=179, avg=81.35, stdev=21.98 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 61], 00:27:11.935 | 30.00th=[ 
69], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 87], 00:27:11.935 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 120], 00:27:11.935 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:27:11.935 | 99.99th=[ 180] 00:27:11.935 bw ( KiB/s): min= 616, max= 872, per=4.35%, avg=783.85, stdev=74.99, samples=20 00:27:11.935 iops : min= 154, max= 218, avg=195.95, stdev=18.74, samples=20 00:27:11.935 lat (msec) : 50=5.02%, 100=81.63%, 250=13.34% 00:27:11.935 cpu : usr=36.70%, sys=2.26%, ctx=1015, majf=0, minf=1072 00:27:11.935 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename1: (groupid=0, jobs=1): err= 0: pid=89594: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=197, BW=791KiB/s (810kB/s)(7936KiB/10035msec) 00:27:11.935 slat (usec): min=5, max=8093, avg=37.37, stdev=402.68 00:27:11.935 clat (msec): min=24, max=159, avg=80.67, stdev=21.78 00:27:11.935 lat (msec): min=24, max=159, avg=80.71, stdev=21.79 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 61], 00:27:11.935 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 85], 00:27:11.935 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:27:11.935 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:27:11.935 | 99.99th=[ 159] 00:27:11.935 bw ( KiB/s): min= 632, max= 888, per=4.37%, avg=788.30, stdev=77.64, samples=20 00:27:11.935 iops : min= 158, max= 222, avg=197.05, stdev=19.43, samples=20 00:27:11.935 lat (msec) : 50=6.05%, 100=81.75%, 250=12.20% 00:27:11.935 cpu : usr=31.67%, sys=1.89%, ctx=897, majf=0, minf=1073 00:27:11.935 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89595: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=197, BW=790KiB/s (808kB/s)(7964KiB/10087msec) 00:27:11.935 slat (usec): min=5, max=7046, avg=26.16, stdev=213.56 00:27:11.935 clat (msec): min=8, max=153, avg=80.75, stdev=23.07 00:27:11.935 lat (msec): min=8, max=153, avg=80.77, stdev=23.07 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 55], 20.00th=[ 61], 00:27:11.935 | 30.00th=[ 70], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 88], 00:27:11.935 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 116], 00:27:11.935 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:27:11.935 | 99.99th=[ 155] 00:27:11.935 bw ( KiB/s): min= 664, max= 1136, per=4.38%, avg=789.65, stdev=105.13, samples=20 00:27:11.935 iops : min= 166, max= 284, avg=197.40, stdev=26.29, samples=20 00:27:11.935 lat (msec) : 10=0.80%, 20=1.61%, 50=5.07%, 100=78.20%, 250=14.31% 00:27:11.935 cpu : usr=39.79%, sys=2.26%, ctx=1145, majf=0, minf=1075 00:27:11.935 IO depths : 1=0.2%, 2=0.6%, 4=2.0%, 8=81.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89596: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=195, BW=784KiB/s (803kB/s)(7892KiB/10067msec) 00:27:11.935 slat (usec): min=5, max=4037, avg=25.93, stdev=180.81 00:27:11.935 clat (msec): min=30, max=152, avg=81.35, stdev=20.38 00:27:11.935 lat (msec): min=30, max=152, avg=81.38, stdev=20.38 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 63], 00:27:11.935 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 88], 00:27:11.935 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 116], 00:27:11.935 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:27:11.935 | 99.99th=[ 153] 00:27:11.935 bw ( KiB/s): min= 640, max= 872, per=4.35%, avg=783.05, stdev=69.02, samples=20 00:27:11.935 iops : min= 160, max= 218, avg=195.70, stdev=17.29, samples=20 00:27:11.935 lat (msec) : 50=5.07%, 100=80.74%, 250=14.19% 00:27:11.935 cpu : usr=42.62%, sys=2.36%, ctx=1394, majf=0, minf=1073 00:27:11.935 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89597: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=186, BW=747KiB/s (765kB/s)(7484KiB/10021msec) 00:27:11.935 slat (usec): min=4, max=8033, avg=25.41, stdev=227.08 00:27:11.935 clat (msec): min=32, max=201, avg=85.56, stdev=21.39 00:27:11.935 lat (msec): min=32, max=201, avg=85.58, stdev=21.39 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 59], 20.00th=[ 65], 00:27:11.935 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 91], 00:27:11.935 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 122], 00:27:11.935 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 203], 00:27:11.935 | 99.99th=[ 203] 00:27:11.935 bw ( KiB/s): min= 616, max= 872, per=4.12%, avg=742.10, stdev=86.30, samples=20 00:27:11.935 iops : min= 154, max= 218, avg=185.50, stdev=21.59, samples=20 00:27:11.935 lat (msec) : 50=3.05%, 100=79.58%, 250=17.37% 00:27:11.935 cpu : usr=38.77%, sys=2.50%, ctx=1151, majf=0, minf=1074 00:27:11.935 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=74.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89598: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=184, BW=737KiB/s (755kB/s)(7380KiB/10016msec) 00:27:11.935 slat (usec): min=4, max=7036, avg=27.40, stdev=230.15 00:27:11.935 clat (msec): min=31, max=200, avg=86.68, stdev=21.21 00:27:11.935 lat (msec): min=31, max=200, avg=86.71, stdev=21.22 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 48], 
5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 66], 00:27:11.935 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 92], 00:27:11.935 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 111], 95.00th=[ 122], 00:27:11.935 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 194], 99.95th=[ 201], 00:27:11.935 | 99.99th=[ 201] 00:27:11.935 bw ( KiB/s): min= 524, max= 864, per=4.03%, avg=726.84, stdev=92.11, samples=19 00:27:11.935 iops : min= 131, max= 216, avg=181.68, stdev=23.06, samples=19 00:27:11.935 lat (msec) : 50=2.33%, 100=76.96%, 250=20.70% 00:27:11.935 cpu : usr=38.37%, sys=2.03%, ctx=1185, majf=0, minf=1073 00:27:11.935 IO depths : 1=0.1%, 2=2.6%, 4=10.4%, 8=72.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=89.7%, 8=8.0%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89599: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=164, BW=659KiB/s (674kB/s)(6592KiB/10009msec) 00:27:11.935 slat (usec): min=4, max=8034, avg=22.13, stdev=203.76 00:27:11.935 clat (msec): min=33, max=180, avg=96.96, stdev=17.04 00:27:11.935 lat (msec): min=33, max=188, avg=96.98, stdev=17.07 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 62], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 85], 00:27:11.935 | 30.00th=[ 87], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:27:11.935 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 131], 00:27:11.935 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:27:11.935 | 99.99th=[ 180] 00:27:11.935 bw ( KiB/s): min= 512, max= 768, per=3.62%, avg=652.63, stdev=70.06, samples=19 00:27:11.935 iops : min= 128, max= 192, avg=163.16, stdev=17.52, samples=19 00:27:11.935 lat (msec) : 50=0.12%, 100=69.90%, 250=29.98% 00:27:11.935 cpu : usr=36.65%, sys=2.07%, ctx=1111, majf=0, minf=1075 00:27:11.935 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89600: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=198, BW=794KiB/s (813kB/s)(8008KiB/10084msec) 00:27:11.935 slat (usec): min=5, max=8032, avg=31.63, stdev=338.28 00:27:11.935 clat (msec): min=5, max=158, avg=80.21, stdev=31.25 00:27:11.935 lat (msec): min=5, max=158, avg=80.24, stdev=31.25 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 20], 20.00th=[ 61], 00:27:11.935 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 94], 00:27:11.935 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 120], 95.00th=[ 132], 00:27:11.935 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 157], 00:27:11.935 | 99.99th=[ 159] 00:27:11.935 bw ( KiB/s): min= 528, max= 2272, per=4.41%, avg=794.40, stdev=356.00, samples=20 00:27:11.935 iops : min= 132, max= 568, avg=198.60, stdev=89.00, samples=20 00:27:11.935 lat (msec) : 10=7.19%, 20=3.00%, 50=2.05%, 100=69.58%, 250=18.18% 00:27:11.935 cpu : usr=31.89%, sys=1.81%, ctx=913, majf=0, minf=1075 00:27:11.935 IO depths : 1=0.4%, 2=2.8%, 4=9.5%, 8=72.6%, 16=14.6%, 
32=0.0%, >=64=0.0% 00:27:11.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 complete : 0=0.0%, 4=89.9%, 8=8.0%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.935 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.935 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.935 filename2: (groupid=0, jobs=1): err= 0: pid=89601: Thu Jul 11 06:13:26 2024 00:27:11.935 read: IOPS=170, BW=681KiB/s (697kB/s)(6828KiB/10026msec) 00:27:11.935 slat (usec): min=4, max=8037, avg=26.31, stdev=274.41 00:27:11.935 clat (msec): min=32, max=173, avg=93.78, stdev=19.59 00:27:11.935 lat (msec): min=32, max=173, avg=93.81, stdev=19.59 00:27:11.935 clat percentiles (msec): 00:27:11.935 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 83], 00:27:11.935 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 96], 00:27:11.936 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:27:11.936 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 174], 00:27:11.936 | 99.99th=[ 174] 00:27:11.936 bw ( KiB/s): min= 512, max= 792, per=3.75%, avg=676.05, stdev=70.45, samples=20 00:27:11.936 iops : min= 128, max= 198, avg=169.00, stdev=17.60, samples=20 00:27:11.936 lat (msec) : 50=0.53%, 100=72.06%, 250=27.42% 00:27:11.936 cpu : usr=31.52%, sys=1.60%, ctx=914, majf=0, minf=1073 00:27:11.936 IO depths : 1=0.1%, 2=4.9%, 4=19.4%, 8=62.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:27:11.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.936 complete : 0=0.0%, 4=92.7%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.936 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.936 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.936 filename2: (groupid=0, jobs=1): err= 0: pid=89602: Thu Jul 11 06:13:26 2024 00:27:11.936 read: IOPS=192, BW=771KiB/s (790kB/s)(7744KiB/10041msec) 00:27:11.936 slat (usec): min=4, max=8035, avg=26.23, stdev=242.10 00:27:11.936 clat (msec): min=26, max=188, avg=82.75, stdev=21.80 00:27:11.936 lat (msec): min=26, max=188, avg=82.78, stdev=21.80 00:27:11.936 clat percentiles (msec): 00:27:11.936 | 1.00th=[ 39], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 61], 00:27:11.936 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 86], 00:27:11.936 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:11.936 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:27:11.936 | 99.99th=[ 188] 00:27:11.936 bw ( KiB/s): min= 592, max= 888, per=4.27%, avg=770.80, stdev=87.99, samples=20 00:27:11.936 iops : min= 148, max= 222, avg=192.70, stdev=22.00, samples=20 00:27:11.936 lat (msec) : 50=5.06%, 100=81.46%, 250=13.48% 00:27:11.936 cpu : usr=31.42%, sys=1.66%, ctx=915, majf=0, minf=1075 00:27:11.936 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:11.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.936 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.936 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.936 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:11.936 00:27:11.936 Run status group 0 (all jobs): 00:27:11.936 READ: bw=17.6MiB/s (18.4MB/s), 659KiB/s-854KiB/s (674kB/s-875kB/s), io=178MiB (187MB), run=10009-10125msec 00:27:11.936 ----------------------------------------------------- 00:27:11.936 Suppressions used: 00:27:11.936 count bytes template 00:27:11.936 45 402 /usr/src/fio/parse.c 
00:27:11.936 1 8 libtcmalloc_minimal.so 00:27:11.936 1 904 libcrypto.so 00:27:11.936 ----------------------------------------------------- 00:27:11.936 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:11.936 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.936 06:13:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 bdev_null0 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 [2024-07-11 06:13:27.886729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 bdev_null1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.196 { 00:27:12.196 "params": { 00:27:12.196 "name": "Nvme$subsystem", 00:27:12.196 "trtype": "$TEST_TRANSPORT", 00:27:12.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.196 "adrfam": "ipv4", 00:27:12.196 "trsvcid": "$NVMF_PORT", 00:27:12.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.196 "hdgst": ${hdgst:-false}, 00:27:12.196 "ddgst": ${ddgst:-false} 00:27:12.196 }, 00:27:12.196 "method": "bdev_nvme_attach_controller" 00:27:12.196 } 00:27:12.196 EOF 00:27:12.196 )") 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:12.196 { 00:27:12.196 "params": { 00:27:12.196 "name": "Nvme$subsystem", 00:27:12.196 "trtype": "$TEST_TRANSPORT", 00:27:12.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.196 "adrfam": "ipv4", 00:27:12.196 "trsvcid": "$NVMF_PORT", 00:27:12.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.196 "hdgst": ${hdgst:-false}, 00:27:12.196 "ddgst": ${ddgst:-false} 00:27:12.196 }, 00:27:12.196 "method": "bdev_nvme_attach_controller" 00:27:12.196 } 00:27:12.196 EOF 00:27:12.196 )") 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:12.196 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:12.197 "params": { 00:27:12.197 "name": "Nvme0", 00:27:12.197 "trtype": "tcp", 00:27:12.197 "traddr": "10.0.0.2", 00:27:12.197 "adrfam": "ipv4", 00:27:12.197 "trsvcid": "4420", 00:27:12.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:12.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:12.197 "hdgst": false, 00:27:12.197 "ddgst": false 00:27:12.197 }, 00:27:12.197 "method": "bdev_nvme_attach_controller" 00:27:12.197 },{ 00:27:12.197 "params": { 00:27:12.197 "name": "Nvme1", 00:27:12.197 "trtype": "tcp", 00:27:12.197 "traddr": "10.0.0.2", 00:27:12.197 "adrfam": "ipv4", 00:27:12.197 "trsvcid": "4420", 00:27:12.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.197 "hdgst": false, 00:27:12.197 "ddgst": false 00:27:12.197 }, 00:27:12.197 "method": "bdev_nvme_attach_controller" 00:27:12.197 }' 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:12.197 06:13:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.456 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:12.456 ... 00:27:12.456 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:12.456 ... 
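
Note: the fio run above is driven through the SPDK bdev fio plugin: the plugin library is LD_PRELOADed next to libasan, --ioengine=spdk_bdev selects it, and --spdk_json_conf supplies a bdev configuration that attaches the two NVMe-oF/TCP controllers whose parameters are printed just above. The test feeds both the JSON config and the job file over /dev/fd process substitution; the sketch below is a hypothetical standalone equivalent written to ordinary files. The job values (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) come from the job headers and the dif.sh settings echoed earlier, while the bdev names Nvme0n1/Nvme1n1, the time_based flag, and the outer "subsystems"/"bdev"/"config" JSON wrapper are assumptions rather than values copied from the log.

# Hypothetical standalone reproduction of the fio_dif_rand_params run (paths and names assumed).
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Bdev config that attaches the two controllers exported by the target; the "params"
# blocks mirror the printf output above, the outer wrapper follows the standard SPDK
# JSON config layout and is an assumption here.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
EOF

# Job file matching the "filename0"/"filename1" headers above: 8k reads / 16k writes /
# 128k trims, queue depth 8, two jobs per bdev (hence the "Starting 4 threads" below).
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif_rand.fio

This only works while the target side (the null bdevs behind cnode0/cnode1 listening on 10.0.0.2:4420) is still up.
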
00:27:12.456 fio-3.35 00:27:12.456 Starting 4 threads 00:27:19.019 00:27:19.019 filename0: (groupid=0, jobs=1): err= 0: pid=89737: Thu Jul 11 06:13:34 2024 00:27:19.019 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5001msec) 00:27:19.019 slat (nsec): min=5375, max=86467, avg=13777.69, stdev=4926.02 00:27:19.019 clat (usec): min=758, max=8966, avg=3969.76, stdev=1318.58 00:27:19.019 lat (usec): min=783, max=8994, avg=3983.53, stdev=1318.34 00:27:19.019 clat percentiles (usec): 00:27:19.019 | 1.00th=[ 1647], 5.00th=[ 1729], 10.00th=[ 1762], 20.00th=[ 1893], 00:27:19.019 | 30.00th=[ 3589], 40.00th=[ 3851], 50.00th=[ 4686], 60.00th=[ 4817], 00:27:19.019 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:27:19.019 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 6063], 99.95th=[ 8586], 00:27:19.019 | 99.99th=[ 8979] 00:27:19.019 bw ( KiB/s): min=15008, max=17120, per=30.94%, avg=16341.33, stdev=732.21, samples=9 00:27:19.019 iops : min= 1876, max= 2140, avg=2042.67, stdev=91.53, samples=9 00:27:19.019 lat (usec) : 1000=0.03% 00:27:19.019 lat (msec) : 2=21.33%, 4=20.17%, 10=58.46% 00:27:19.019 cpu : usr=91.48%, sys=7.52%, ctx=6, majf=0, minf=1074 00:27:19.019 IO depths : 1=0.1%, 2=3.1%, 4=62.0%, 8=34.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.019 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.019 issued rwts: total=9984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.019 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.019 filename0: (groupid=0, jobs=1): err= 0: pid=89738: Thu Jul 11 06:13:34 2024 00:27:19.019 read: IOPS=1519, BW=11.9MiB/s (12.4MB/s)(59.4MiB/5002msec) 00:27:19.020 slat (nsec): min=5304, max=76798, avg=18465.82, stdev=5161.74 00:27:19.020 clat (usec): min=2737, max=6694, avg=5191.59, stdev=274.56 00:27:19.020 lat (usec): min=2753, max=6716, avg=5210.05, stdev=274.82 00:27:19.020 clat percentiles (usec): 00:27:19.020 | 1.00th=[ 4686], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:27:19.020 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5211], 00:27:19.020 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:27:19.020 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6521], 00:27:19.020 | 99.99th=[ 6718] 00:27:19.020 bw ( KiB/s): min=11648, max=12416, per=22.89%, avg=12088.89, stdev=222.73, samples=9 00:27:19.020 iops : min= 1456, max= 1552, avg=1511.11, stdev=27.84, samples=9 00:27:19.020 lat (msec) : 4=0.21%, 10=99.79% 00:27:19.020 cpu : usr=92.18%, sys=6.96%, ctx=22, majf=0, minf=1073 00:27:19.020 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 issued rwts: total=7600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.020 filename1: (groupid=0, jobs=1): err= 0: pid=89739: Thu Jul 11 06:13:34 2024 00:27:19.020 read: IOPS=1566, BW=12.2MiB/s (12.8MB/s)(61.2MiB/5001msec) 00:27:19.020 slat (nsec): min=4263, max=73215, avg=18035.36, stdev=5490.39 00:27:19.020 clat (usec): min=1183, max=8580, avg=5037.48, stdev=572.44 00:27:19.020 lat (usec): min=1198, max=8599, avg=5055.52, stdev=572.56 00:27:19.020 clat percentiles (usec): 00:27:19.020 | 1.00th=[ 2704], 5.00th=[ 3359], 10.00th=[ 4752], 20.00th=[ 4948], 
00:27:19.020 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5211], 00:27:19.020 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5407], 95.00th=[ 5473], 00:27:19.020 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7898], 99.95th=[ 7898], 00:27:19.020 | 99.99th=[ 8586] 00:27:19.020 bw ( KiB/s): min=11904, max=15008, per=23.71%, avg=12519.11, stdev=961.07, samples=9 00:27:19.020 iops : min= 1488, max= 1876, avg=1564.89, stdev=120.13, samples=9 00:27:19.020 lat (msec) : 2=0.03%, 4=6.04%, 10=93.94% 00:27:19.020 cpu : usr=91.76%, sys=7.38%, ctx=8, majf=0, minf=1074 00:27:19.020 IO depths : 1=0.1%, 2=22.2%, 4=51.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 issued rwts: total=7834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.020 filename1: (groupid=0, jobs=1): err= 0: pid=89740: Thu Jul 11 06:13:34 2024 00:27:19.020 read: IOPS=1519, BW=11.9MiB/s (12.4MB/s)(59.4MiB/5001msec) 00:27:19.020 slat (usec): min=5, max=288, avg=18.66, stdev= 6.30 00:27:19.020 clat (usec): min=2727, max=6585, avg=5189.20, stdev=271.81 00:27:19.020 lat (usec): min=2744, max=6603, avg=5207.86, stdev=272.18 00:27:19.020 clat percentiles (usec): 00:27:19.020 | 1.00th=[ 4686], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:27:19.020 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5211], 00:27:19.020 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:27:19.020 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6521], 00:27:19.020 | 99.99th=[ 6587] 00:27:19.020 bw ( KiB/s): min=11632, max=12416, per=22.89%, avg=12089.78, stdev=229.50, samples=9 00:27:19.020 iops : min= 1454, max= 1552, avg=1511.22, stdev=28.69, samples=9 00:27:19.020 lat (msec) : 4=0.21%, 10=99.79% 00:27:19.020 cpu : usr=92.28%, sys=6.90%, ctx=113, majf=0, minf=1074 00:27:19.020 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.020 issued rwts: total=7600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.020 00:27:19.020 Run status group 0 (all jobs): 00:27:19.020 READ: bw=51.6MiB/s (54.1MB/s), 11.9MiB/s-15.6MiB/s (12.4MB/s-16.4MB/s), io=258MiB (270MB), run=5001-5002msec 00:27:19.588 ----------------------------------------------------- 00:27:19.588 Suppressions used: 00:27:19.588 count bytes template 00:27:19.588 6 52 /usr/src/fio/parse.c 00:27:19.588 1 8 libtcmalloc_minimal.so 00:27:19.588 1 904 libcrypto.so 00:27:19.588 ----------------------------------------------------- 00:27:19.588 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
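
Note: the destroy_subsystems 0 1 teardown that starts above and finishes just below is a thin wrapper around two RPCs per subsystem. A hypothetical equivalent using scripts/rpc.py directly (path assumed from this environment) looks like this; the test issues exactly these calls through its rpc_cmd helper, deleting each subsystem before its backing null bdev.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path to the SPDK RPC client
for i in 0 1; do
  "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the NVMe-oF subsystem
  "$RPC" bdev_null_delete "bdev_null$i"                        # then delete its backing bdev
done
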
00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.588 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.589 ************************************ 00:27:19.589 END TEST fio_dif_rand_params 00:27:19.589 ************************************ 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.589 00:27:19.589 real 0m27.667s 00:27:19.589 user 2m7.582s 00:27:19.589 sys 0m8.742s 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.589 06:13:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 06:13:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:19.849 06:13:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:19.849 06:13:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.849 06:13:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.849 06:13:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 ************************************ 00:27:19.849 START TEST fio_dif_digest 00:27:19.849 ************************************ 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
iodepth=3 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 bdev_null0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.849 [2024-07-11 06:13:35.564010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.849 { 00:27:19.849 "params": { 00:27:19.849 "name": "Nvme$subsystem", 00:27:19.849 "trtype": "$TEST_TRANSPORT", 00:27:19.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.849 "adrfam": "ipv4", 00:27:19.849 "trsvcid": "$NVMF_PORT", 00:27:19.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.849 "hdgst": ${hdgst:-false}, 00:27:19.849 "ddgst": ${ddgst:-false} 00:27:19.849 }, 00:27:19.849 "method": "bdev_nvme_attach_controller" 00:27:19.849 } 00:27:19.849 EOF 00:27:19.849 )") 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
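
Note: the target side built above for fio_dif_digest is a single DIF type 3 null bdev (512-byte blocks plus 16 bytes of metadata) exported over NVMe-oF/TCP. A hypothetical standalone equivalent of create_subsystems 0, issued through scripts/rpc.py against an already running nvmf target with the TCP transport created (both done earlier in this test), is sketched below; the fio side then mirrors the earlier sketch, but with bs=128k, iodepth=3, numjobs=3, runtime=10 and "hdgst": true / "ddgst": true in the attach parameters, as shown in the JSON printed further down.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path to the SPDK RPC client

# 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 3, as in the log.
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
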
00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:19.849 "params": { 00:27:19.849 "name": "Nvme0", 00:27:19.849 "trtype": "tcp", 00:27:19.849 "traddr": "10.0.0.2", 00:27:19.849 "adrfam": "ipv4", 00:27:19.849 "trsvcid": "4420", 00:27:19.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.849 "hdgst": true, 00:27:19.849 "ddgst": true 00:27:19.849 }, 00:27:19.849 "method": "bdev_nvme_attach_controller" 00:27:19.849 }' 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:19.849 06:13:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.108 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:20.108 ... 00:27:20.108 fio-3.35 00:27:20.108 Starting 3 threads 00:27:32.374 00:27:32.374 filename0: (groupid=0, jobs=1): err= 0: pid=89856: Thu Jul 11 06:13:46 2024 00:27:32.374 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(217MiB/10014msec) 00:27:32.374 slat (nsec): min=9157, max=57263, avg=15627.92, stdev=7531.55 00:27:32.374 clat (usec): min=16031, max=21896, avg=17255.77, stdev=687.01 00:27:32.374 lat (usec): min=16041, max=21934, avg=17271.39, stdev=687.75 00:27:32.374 clat percentiles (usec): 00:27:32.374 | 1.00th=[16057], 5.00th=[16319], 10.00th=[16450], 20.00th=[16581], 00:27:32.374 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:27:32.374 | 70.00th=[17695], 80.00th=[17695], 90.00th=[18220], 95.00th=[18482], 00:27:32.374 | 99.00th=[18744], 99.50th=[19006], 99.90th=[21890], 99.95th=[21890], 00:27:32.374 | 99.99th=[21890] 00:27:32.374 bw ( KiB/s): min=21504, max=23040, per=33.32%, avg=22195.20, stdev=343.46, samples=20 00:27:32.374 iops : min= 168, max= 180, avg=173.40, stdev= 2.68, samples=20 00:27:32.374 lat (msec) : 20=99.83%, 50=0.17% 00:27:32.374 cpu : usr=91.76%, sys=7.61%, ctx=13, majf=0, minf=1074 00:27:32.374 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.374 filename0: (groupid=0, jobs=1): err= 0: pid=89857: Thu Jul 11 06:13:46 2024 00:27:32.374 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(217MiB/10012msec) 00:27:32.374 slat (nsec): min=5289, max=74816, avg=20001.86, stdev=6826.96 00:27:32.374 clat (usec): min=16013, max=22211, avg=17245.24, stdev=690.81 00:27:32.374 lat (usec): min=16029, max=22231, avg=17265.25, stdev=691.54 00:27:32.374 clat percentiles (usec): 00:27:32.374 | 1.00th=[16057], 5.00th=[16319], 10.00th=[16450], 20.00th=[16581], 00:27:32.374 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:27:32.374 | 
70.00th=[17695], 80.00th=[17695], 90.00th=[18220], 95.00th=[18482], 00:27:32.374 | 99.00th=[19006], 99.50th=[19006], 99.90th=[22152], 99.95th=[22152], 00:27:32.374 | 99.99th=[22152] 00:27:32.374 bw ( KiB/s): min=21504, max=23040, per=33.32%, avg=22195.20, stdev=343.46, samples=20 00:27:32.374 iops : min= 168, max= 180, avg=173.40, stdev= 2.68, samples=20 00:27:32.374 lat (msec) : 20=99.83%, 50=0.17% 00:27:32.374 cpu : usr=91.49%, sys=7.86%, ctx=9, majf=0, minf=1075 00:27:32.374 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.374 filename0: (groupid=0, jobs=1): err= 0: pid=89858: Thu Jul 11 06:13:46 2024 00:27:32.374 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(217MiB/10012msec) 00:27:32.374 slat (nsec): min=5489, max=66693, avg=20291.07, stdev=7048.75 00:27:32.374 clat (usec): min=16018, max=21791, avg=17243.90, stdev=685.63 00:27:32.374 lat (usec): min=16034, max=21836, avg=17264.19, stdev=686.60 00:27:32.374 clat percentiles (usec): 00:27:32.374 | 1.00th=[16057], 5.00th=[16319], 10.00th=[16450], 20.00th=[16712], 00:27:32.374 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:27:32.374 | 70.00th=[17695], 80.00th=[17695], 90.00th=[18220], 95.00th=[18482], 00:27:32.374 | 99.00th=[19006], 99.50th=[19006], 99.90th=[21890], 99.95th=[21890], 00:27:32.374 | 99.99th=[21890] 00:27:32.374 bw ( KiB/s): min=21504, max=23040, per=33.33%, avg=22197.35, stdev=339.01, samples=20 00:27:32.374 iops : min= 168, max= 180, avg=173.40, stdev= 2.68, samples=20 00:27:32.374 lat (msec) : 20=99.83%, 50=0.17% 00:27:32.374 cpu : usr=91.10%, sys=8.25%, ctx=112, majf=0, minf=1072 00:27:32.374 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.374 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:32.374 00:27:32.374 Run status group 0 (all jobs): 00:27:32.374 READ: bw=65.0MiB/s (68.2MB/s), 21.7MiB/s-21.7MiB/s (22.7MB/s-22.7MB/s), io=651MiB (683MB), run=10012-10014msec 00:27:32.374 ----------------------------------------------------- 00:27:32.374 Suppressions used: 00:27:32.374 count bytes template 00:27:32.374 5 44 /usr/src/fio/parse.c 00:27:32.374 1 8 libtcmalloc_minimal.so 00:27:32.374 1 904 libcrypto.so 00:27:32.374 ----------------------------------------------------- 00:27:32.374 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.374 06:13:48 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 ************************************ 00:27:32.374 END TEST fio_dif_digest 00:27:32.374 ************************************ 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.374 00:27:32.374 real 0m12.521s 00:27:32.374 user 0m29.523s 00:27:32.374 sys 0m2.755s 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.374 06:13:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:32.374 06:13:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:32.374 06:13:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.374 rmmod nvme_tcp 00:27:32.374 rmmod nvme_fabrics 00:27:32.374 rmmod nvme_keyring 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 89095 ']' 00:27:32.374 06:13:48 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 89095 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 89095 ']' 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 89095 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89095 00:27:32.374 killing process with pid 89095 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89095' 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@967 -- # kill 89095 00:27:32.374 06:13:48 nvmf_dif -- common/autotest_common.sh@972 -- # wait 89095 00:27:33.751 06:13:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:33.751 06:13:49 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:34.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:34.009 Waiting for block devices as requested 00:27:34.268 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:34.268 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:34.268 06:13:50 
nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.268 06:13:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.268 06:13:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.268 06:13:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.268 06:13:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.268 06:13:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:34.268 06:13:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.268 06:13:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:34.268 00:27:34.268 real 1m9.317s 00:27:34.268 user 4m5.904s 00:27:34.268 sys 0m19.952s 00:27:34.268 06:13:50 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.268 ************************************ 00:27:34.268 END TEST nvmf_dif 00:27:34.268 06:13:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:34.268 ************************************ 00:27:34.527 06:13:50 -- common/autotest_common.sh@1142 -- # return 0 00:27:34.527 06:13:50 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:34.527 06:13:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:34.527 06:13:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.527 06:13:50 -- common/autotest_common.sh@10 -- # set +x 00:27:34.527 ************************************ 00:27:34.527 START TEST nvmf_abort_qd_sizes 00:27:34.527 ************************************ 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:34.527 * Looking for test storage... 
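The fio_dif_digest job that just finished drives fio's spdk_bdev ioengine with a bdev_nvme_attach_controller config whose hdgst/ddgst flags turn on NVMe/TCP header and data digests (the JSON fragment printed near the start of the run above). A minimal way to reproduce that wiring by hand is sketched below; the paths, the JSON envelope, the bdev name Nvme0n1 and the job file are illustrative assumptions inferred from the output, not the exact artifacts test/nvmf/target/dif.sh generates.

# Sketch only: adjust SPDK_DIR and the fio binary path for your environment.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Standard SPDK JSON-config envelope around the attach call printed above;
# hdgst/ddgst enable header and data digests on the initiator side.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

# Minimal job matching the run above (3 threads, randread, 128k, iodepth 3);
# Nvme0n1 is assumed to be the bdev for namespace 1 of the attached controller.
cat > /tmp/digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
numjobs=3
EOF

# Preload ASAN plus the SPDK fio plugin, exactly as the test trace does.
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --spdk_json_conf=/tmp/bdev.json /tmp/digest.fio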
00:27:34.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:34.527 06:13:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:34.527 Cannot find device "nvmf_tgt_br" 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:34.527 Cannot find device "nvmf_tgt_br2" 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:34.527 Cannot find device "nvmf_tgt_br" 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:34.527 Cannot find device "nvmf_tgt_br2" 00:27:34.527 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:27:34.528 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:34.528 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:34.786 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:34.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.786 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:34.786 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:34.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:34.787 06:13:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:34.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:27:34.787 00:27:34.787 --- 10.0.0.2 ping statistics --- 00:27:34.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.787 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:34.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:34.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:27:34.787 00:27:34.787 --- 10.0.0.3 ping statistics --- 00:27:34.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.787 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:34.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:27:34.787 00:27:34.787 --- 10.0.0.1 ping statistics --- 00:27:34.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.787 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:34.787 06:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:35.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:35.724 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:35.724 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=90469 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 90469 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 90469 ']' 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.724 06:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:35.983 [2024-07-11 06:13:51.718732] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
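The nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target side lives inside the nvmf_tgt_ns_spdk namespace (10.0.0.2, plus a second interface at 10.0.0.3), and the nvmf_br bridge ties the peer ends together. A condensed sketch of the same setup with a single target interface (the real helper also adds nvmf_tgt_if2/10.0.0.3):

# Condensed re-creation of the veth/bridge topology set up above.
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns "$NS"                         # target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if # target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge                             # join the two halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP back in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the log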
00:27:35.983 [2024-07-11 06:13:51.718951] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.983 [2024-07-11 06:13:51.903575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.242 [2024-07-11 06:13:52.160879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.243 [2024-07-11 06:13:52.160971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.243 [2024-07-11 06:13:52.160997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.243 [2024-07-11 06:13:52.161015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.243 [2024-07-11 06:13:52.161028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.243 [2024-07-11 06:13:52.161933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.243 [2024-07-11 06:13:52.162047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.243 [2024-07-11 06:13:52.162187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.243 [2024-07-11 06:13:52.162258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.502 [2024-07-11 06:13:52.382636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:36.760 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.760 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:36.760 06:13:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.760 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.760 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:27:37.020 06:13:52 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
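The nvme_in_userspace walk above reduces to filtering lspci for the NVMe class code (base class 01, subclass 08, prog-if 02) and keeping the matching BDFs; a rough standalone equivalent of that pipeline follows (the real scripts/common.sh helper additionally runs each BDF through pci_can_use to honour PCI_ALLOWED/PCI_BLOCKED filtering):

# Print the PCI address (BDF) of every NVMe controller in the system.
# In `lspci -mm -n -D` output the class field is the quoted "0108" and the
# prog-if shows up as -p02, which is what the grep/awk pair match on.
lspci -mm -n -D | grep -i -- -p02 \
  | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'

On this VM that yields 0000:00:10.0 and 0000:00:11.0, and the test takes the first BDF as the spdk_target device.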
00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 ************************************ 00:27:37.020 START TEST spdk_target_abort 00:27:37.020 ************************************ 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 spdk_targetn1 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 [2024-07-11 06:13:52.843899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.020 [2024-07-11 06:13:52.880306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.020 06:13:52 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:37.020 06:13:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:40.305 Initializing NVMe Controllers 00:27:40.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:40.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:40.305 Initialization complete. Launching workers. 
00:27:40.305 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8044, failed: 0 00:27:40.305 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1078, failed to submit 6966 00:27:40.305 success 799, unsuccess 279, failed 0 00:27:40.563 06:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:40.563 06:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:43.860 Initializing NVMe Controllers 00:27:43.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:43.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:43.860 Initialization complete. Launching workers. 00:27:43.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:27:43.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1183, failed to submit 7697 00:27:43.860 success 368, unsuccess 815, failed 0 00:27:43.860 06:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:43.860 06:13:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:47.147 Initializing NVMe Controllers 00:27:47.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:47.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:47.147 Initialization complete. Launching workers. 
00:27:47.147 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26507, failed: 0 00:27:47.147 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2304, failed to submit 24203 00:27:47.147 success 379, unsuccess 1925, failed 0 00:27:47.147 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:47.147 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.147 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.147 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.147 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90469 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 90469 ']' 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 90469 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.405 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90469 00:27:47.664 killing process with pid 90469 00:27:47.664 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.664 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.664 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90469' 00:27:47.664 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 90469 00:27:47.664 06:14:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 90469 00:27:48.598 00:27:48.598 real 0m11.731s 00:27:48.598 user 0m45.184s 00:27:48.598 sys 0m2.416s 00:27:48.598 06:14:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.598 ************************************ 00:27:48.598 END TEST spdk_target_abort 00:27:48.598 ************************************ 00:27:48.598 06:14:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:48.598 06:14:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:48.598 06:14:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:48.598 06:14:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:48.598 06:14:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.598 06:14:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:48.598 
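Stripped of the xtrace noise, the spdk_target_abort flow that just completed is four RPCs plus the abort example at three queue depths. A sketch of the same sequence issued directly with scripts/rpc.py against an already-running nvmf_tgt (the test goes through its rpc_cmd wrapper instead, and the local socket path is the rpc.py default):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ABORT=/home/vagrant/spdk_repo/spdk/build/examples/abort

# Expose the local PCIe NVMe device (first BDF found above) over NVMe/TCP.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Hammer the subsystem with aborts at increasing queue depths, as the test does.
for qd in 4 24 64; do
    $ABORT -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done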
************************************ 00:27:48.598 START TEST kernel_target_abort 00:27:48.598 ************************************ 00:27:48.598 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:48.856 06:14:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:49.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:49.114 Waiting for block devices as requested 00:27:49.114 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.372 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:49.631 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:49.632 No valid GPT data, bailing 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:49.632 No valid GPT data, bailing 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:49.632 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:49.891 No valid GPT data, bailing 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:49.891 No valid GPT data, bailing 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:49.891 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 --hostid=8738190a-dd44-4449-9019-403e2a10a368 -a 10.0.0.1 -t tcp -s 4420 00:27:49.891 00:27:49.891 Discovery Log Number of Records 2, Generation counter 2 00:27:49.891 =====Discovery Log Entry 0====== 00:27:49.891 trtype: tcp 00:27:49.891 adrfam: ipv4 00:27:49.891 subtype: current discovery subsystem 00:27:49.891 treq: not specified, sq flow control disable supported 00:27:49.891 portid: 1 00:27:49.891 trsvcid: 4420 00:27:49.891 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:49.891 traddr: 10.0.0.1 00:27:49.891 eflags: none 00:27:49.891 sectype: none 00:27:49.891 =====Discovery Log Entry 1====== 00:27:49.891 trtype: tcp 00:27:49.891 adrfam: ipv4 00:27:49.891 subtype: nvme subsystem 00:27:49.891 treq: not specified, sq flow control disable supported 00:27:49.892 portid: 1 00:27:49.892 trsvcid: 4420 00:27:49.892 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:49.892 traddr: 10.0.0.1 00:27:49.892 eflags: none 00:27:49.892 sectype: none 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:49.892 06:14:05 
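The configure_kernel_target steps traced above are plain configfs writes under /sys/kernel/config/nvmet. A condensed sketch of the same setup, using the NQN, namespace device and listen address from this run; the attribute names follow the standard nvmet configfs layout, and a real script should first confirm the chosen block device is unused, as the GPT checks above do.

NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$NQN

modprobe nvmet        # kernel NVMe-oF target core, as in the trace
modprobe nvmet-tcp    # TCP transport; assumed here, often loaded on demand

mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$NVMET/ports/1"

echo "SPDK-$NQN"   > "$SUBSYS/attr_serial"            # serial, as echoed above
echo 1             > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme1n1  > "$SUBSYS/namespaces/1/device_path"
echo 1             > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1      > "$NVMET/ports/1/addr_traddr"     # listen on the initiator-side address
echo tcp           > "$NVMET/ports/1/addr_trtype"
echo 4420          > "$NVMET/ports/1/addr_trsvcid"
echo ipv4          > "$NVMET/ports/1/addr_adrfam"

# Publish the subsystem on the port, then verify with discovery
# (the test additionally passes --hostnqn/--hostid to nvme discover).
ln -s "$SUBSYS" "$NVMET/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420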
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:49.892 06:14:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:53.171 Initializing NVMe Controllers 00:27:53.171 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:53.171 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:53.171 Initialization complete. Launching workers. 00:27:53.171 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24340, failed: 0 00:27:53.171 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24340, failed to submit 0 00:27:53.171 success 0, unsuccess 24340, failed 0 00:27:53.171 06:14:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:53.171 06:14:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:56.454 Initializing NVMe Controllers 00:27:56.454 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:56.454 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:56.454 Initialization complete. Launching workers. 
00:27:56.454 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53420, failed: 0 00:27:56.454 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22116, failed to submit 31304 00:27:56.454 success 0, unsuccess 22116, failed 0 00:27:56.454 06:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:56.454 06:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:59.749 Initializing NVMe Controllers 00:27:59.749 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:59.750 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:59.750 Initialization complete. Launching workers. 00:27:59.750 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61970, failed: 0 00:27:59.750 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15454, failed to submit 46516 00:27:59.750 success 0, unsuccess 15454, failed 0 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:59.750 06:14:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:00.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.255 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:01.255 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:01.255 ************************************ 00:28:01.255 END TEST kernel_target_abort 00:28:01.255 ************************************ 00:28:01.255 00:28:01.255 real 0m12.643s 00:28:01.255 user 0m6.540s 00:28:01.255 sys 0m3.784s 00:28:01.255 06:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.255 06:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:01.514 
06:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.514 rmmod nvme_tcp 00:28:01.514 rmmod nvme_fabrics 00:28:01.514 rmmod nvme_keyring 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 90469 ']' 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 90469 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 90469 ']' 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 90469 00:28:01.514 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90469) - No such process 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 90469 is not found' 00:28:01.514 Process with pid 90469 is not found 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:01.514 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:01.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.773 Waiting for block devices as requested 00:28:02.032 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:02.032 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:02.032 00:28:02.032 real 0m27.710s 00:28:02.032 user 0m52.962s 00:28:02.032 sys 0m7.540s 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.032 06:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:02.032 ************************************ 00:28:02.032 END TEST nvmf_abort_qd_sizes 00:28:02.032 ************************************ 00:28:02.305 06:14:17 -- common/autotest_common.sh@1142 -- # return 0 00:28:02.305 06:14:17 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:02.305 06:14:17 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:28:02.305 06:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.305 06:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:02.305 ************************************ 00:28:02.305 START TEST keyring_file 00:28:02.305 ************************************ 00:28:02.305 06:14:17 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:02.305 * Looking for test storage... 00:28:02.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:02.305 06:14:18 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:02.305 06:14:18 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.305 06:14:18 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:02.305 06:14:18 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.305 06:14:18 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.305 06:14:18 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.305 06:14:18 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.306 06:14:18 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.306 06:14:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.306 06:14:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:02.306 06:14:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZRq72LANX0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZRq72LANX0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZRq72LANX0 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZRq72LANX0 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pirCG1EwYs 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:02.306 06:14:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pirCG1EwYs 00:28:02.306 06:14:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pirCG1EwYs 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pirCG1EwYs 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=91444 00:28:02.306 06:14:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91444 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91444 ']' 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.306 06:14:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:02.581 [2024-07-11 06:14:18.309309] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
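The key preparation traced above reduces to a few lines of shell. This is a sketch only: the variable names are illustrative, and the NVMeTLSkey-1 interchange string is left to the format_interchange_psk/python helper shown in the trace. The strict 0600 mode matters; a later step in this test shows the keyring rejecting a key file left at 0660.

# Sketch of prep_key as exercised above: make a temp file, store the
# interchange-format PSK produced by the helper, and lock down permissions.
key_path=$(mktemp)                               # e.g. /tmp/tmp.ZRq72LANX0
printf '%s\n' "$psk_interchange" > "$key_path"   # output of format_interchange_psk (illustrative variable)
chmod 0600 "$key_path"                           # group/other access is rejected later in the test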
00:28:02.581 [2024-07-11 06:14:18.310199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91444 ] 00:28:02.581 [2024-07-11 06:14:18.486444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.840 [2024-07-11 06:14:18.693196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.099 [2024-07-11 06:14:18.851928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:03.666 06:14:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.666 06:14:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:03.667 06:14:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:03.667 [2024-07-11 06:14:19.329812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.667 null0 00:28:03.667 [2024-07-11 06:14:19.361775] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:03.667 [2024-07-11 06:14:19.362105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:03.667 [2024-07-11 06:14:19.369755] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.667 06:14:19 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:03.667 [2024-07-11 06:14:19.381811] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:03.667 request: 00:28:03.667 { 00:28:03.667 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.667 "secure_channel": false, 00:28:03.667 "listen_address": { 00:28:03.667 "trtype": "tcp", 00:28:03.667 "traddr": "127.0.0.1", 00:28:03.667 "trsvcid": "4420" 00:28:03.667 }, 00:28:03.667 "method": "nvmf_subsystem_add_listener", 00:28:03.667 "req_id": 1 00:28:03.667 } 00:28:03.667 Got JSON-RPC error response 00:28:03.667 response: 00:28:03.667 { 00:28:03.667 "code": -32602, 00:28:03.667 "message": "Invalid parameters" 00:28:03.667 } 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
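The [[ 1 == 0 ]] check above is the NOT helper confirming that the RPC failed as intended: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener for the same address is rejected with -32602 ("Listener already exists"). A minimal sketch of that negative check follows; the rpc.py path and argument order are taken from the trace, and the default /var/tmp/spdk.sock RPC socket is assumed.

# Expected to fail: a listener for 127.0.0.1:4420 already exists on the subsystem.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "unexpected success" >&2
    exit 1
fi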
00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.667 06:14:19 keyring_file -- keyring/file.sh@46 -- # bperfpid=91461 00:28:03.667 06:14:19 keyring_file -- keyring/file.sh@48 -- # waitforlisten 91461 /var/tmp/bperf.sock 00:28:03.667 06:14:19 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91461 ']' 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.667 06:14:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:03.667 [2024-07-11 06:14:19.496923] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 00:28:03.667 [2024-07-11 06:14:19.497120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91461 ] 00:28:03.926 [2024-07-11 06:14:19.663616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.926 [2024-07-11 06:14:19.834010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.184 [2024-07-11 06:14:20.005285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:04.753 06:14:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.753 06:14:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:04.753 06:14:20 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:04.753 06:14:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:05.012 06:14:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pirCG1EwYs 00:28:05.012 06:14:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pirCG1EwYs 00:28:05.272 06:14:20 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:05.272 06:14:20 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:05.272 06:14:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:05.272 06:14:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.272 06:14:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.530 06:14:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ZRq72LANX0 == 
\/\t\m\p\/\t\m\p\.\Z\R\q\7\2\L\A\N\X\0 ]] 00:28:05.530 06:14:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:05.530 06:14:21 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:05.530 06:14:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:05.530 06:14:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.530 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.789 06:14:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pirCG1EwYs == \/\t\m\p\/\t\m\p\.\p\i\r\C\G\1\E\w\Y\s ]] 00:28:05.789 06:14:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:05.789 06:14:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:05.789 06:14:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:05.789 06:14:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.789 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.789 06:14:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:06.048 06:14:21 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:06.048 06:14:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:06.048 06:14:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.048 06:14:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:06.048 06:14:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:06.048 06:14:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.048 06:14:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.306 06:14:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:06.306 06:14:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.306 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.565 [2024-07-11 06:14:22.296325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:06.565 nvme0n1 00:28:06.565 06:14:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:06.565 06:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.565 06:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:06.565 06:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.565 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.565 06:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:06.823 06:14:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:06.823 06:14:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:06.823 06:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:06.823 06:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.823 06:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:28:06.823 06:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.823 06:14:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.082 06:14:22 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:07.082 06:14:22 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.341 Running I/O for 1 seconds... 00:28:08.276 00:28:08.276 Latency(us) 00:28:08.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.276 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:08.276 nvme0n1 : 1.01 8686.78 33.93 0.00 0.00 14666.91 6762.12 22997.18 00:28:08.276 =================================================================================================================== 00:28:08.276 Total : 8686.78 33.93 0.00 0.00 14666.91 6762.12 22997.18 00:28:08.276 0 00:28:08.276 06:14:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:08.276 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:08.534 06:14:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:08.534 06:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.534 06:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.534 06:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.534 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.534 06:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.792 06:14:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:08.792 06:14:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:08.792 06:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.792 06:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:08.792 06:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.792 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.792 06:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:09.050 06:14:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:09.050 06:14:24 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:28:09.050 06:14:24 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.050 06:14:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.309 [2024-07-11 06:14:25.110900] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:09.309 [2024-07-11 06:14:25.111760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:28:09.309 [2024-07-11 06:14:25.112751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:28:09.309 [2024-07-11 06:14:25.113728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:09.309 request: 00:28:09.309 { 00:28:09.309 "name": "nvme0", 00:28:09.309 "trtype": "tcp", 00:28:09.309 "traddr": "127.0.0.1", 00:28:09.309 "adrfam": "ipv4", 00:28:09.309 "trsvcid": "4420", 00:28:09.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.309 "prchk_reftag": false, 00:28:09.309 "prchk_guard": false, 00:28:09.309 "hdgst": false, 00:28:09.309 "ddgst": false, 00:28:09.309 "psk": "key1", 00:28:09.309 "method": "bdev_nvme_attach_controller", 00:28:09.309 "req_id": 1 00:28:09.309 } 00:28:09.309 Got JSON-RPC error response 00:28:09.309 response: 00:28:09.309 { 00:28:09.309 "code": -5, 00:28:09.309 "message": "Input/output error" 00:28:09.309 } 00:28:09.309 [2024-07-11 06:14:25.113956] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:09.309 [2024-07-11 06:14:25.114003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
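The failure above is the expected outcome of this step: the controller was asked to attach with key1, which is not the PSK the target listener was configured with, so the TLS handshake never completes ("Transport endpoint is not connected") and the RPC returns -5 (Input/output error). De-interleaved, the call under test is the following; socket path and arguments are copied from the trace.

# Expected to fail: key1 does not match the PSK configured for this subsystem/host.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1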
00:28:09.309 06:14:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:09.309 06:14:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.309 06:14:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.309 06:14:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.309 06:14:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:09.309 06:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:09.309 06:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.309 06:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:09.309 06:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.309 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.568 06:14:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:09.568 06:14:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:09.568 06:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:09.568 06:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.568 06:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.568 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.568 06:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:09.826 06:14:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:09.826 06:14:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:09.826 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:10.085 06:14:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:10.085 06:14:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:10.343 06:14:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:10.343 06:14:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:10.343 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:10.602 06:14:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:10.602 06:14:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ZRq72LANX0 00:28:10.602 06:14:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.602 06:14:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:10.602 06:14:26 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:10.861 [2024-07-11 06:14:26.618636] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZRq72LANX0': 0100660 00:28:10.861 [2024-07-11 06:14:26.618715] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:10.861 request: 00:28:10.861 { 00:28:10.861 "name": "key0", 00:28:10.861 "path": "/tmp/tmp.ZRq72LANX0", 00:28:10.861 "method": "keyring_file_add_key", 00:28:10.861 "req_id": 1 00:28:10.861 } 00:28:10.861 Got JSON-RPC error response 00:28:10.861 response: 00:28:10.861 { 00:28:10.861 "code": -1, 00:28:10.861 "message": "Operation not permitted" 00:28:10.861 } 00:28:10.861 06:14:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:10.861 06:14:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.861 06:14:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.861 06:14:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.861 06:14:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ZRq72LANX0 00:28:10.861 06:14:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:10.861 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZRq72LANX0 00:28:11.119 06:14:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ZRq72LANX0 00:28:11.119 06:14:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:11.119 06:14:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:11.119 06:14:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.119 06:14:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.119 06:14:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.119 06:14:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.377 06:14:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:11.377 06:14:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.377 06:14:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.377 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.635 [2024-07-11 06:14:27.342110] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZRq72LANX0': No such file or directory 00:28:11.635 [2024-07-11 06:14:27.342198] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:11.635 [2024-07-11 06:14:27.342247] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:11.635 [2024-07-11 06:14:27.342260] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:11.636 [2024-07-11 06:14:27.342274] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:11.636 request: 00:28:11.636 { 00:28:11.636 "name": "nvme0", 00:28:11.636 "trtype": "tcp", 00:28:11.636 "traddr": "127.0.0.1", 00:28:11.636 "adrfam": "ipv4", 00:28:11.636 "trsvcid": "4420", 00:28:11.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.636 "prchk_reftag": false, 00:28:11.636 "prchk_guard": false, 00:28:11.636 "hdgst": false, 00:28:11.636 "ddgst": false, 00:28:11.636 "psk": "key0", 00:28:11.636 "method": "bdev_nvme_attach_controller", 00:28:11.636 "req_id": 1 00:28:11.636 } 00:28:11.636 Got JSON-RPC error response 00:28:11.636 response: 00:28:11.636 { 00:28:11.636 "code": -19, 00:28:11.636 "message": "No such device" 00:28:11.636 } 00:28:11.636 06:14:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:11.636 06:14:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:11.636 06:14:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:11.636 06:14:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:11.636 06:14:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:11.636 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:11.894 06:14:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MswYW5xerH 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:11.894 06:14:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MswYW5xerH 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@23 -- # echo 
/tmp/tmp.MswYW5xerH 00:28:11.894 06:14:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.MswYW5xerH 00:28:11.894 06:14:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MswYW5xerH 00:28:11.894 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MswYW5xerH 00:28:12.152 06:14:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.152 06:14:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.410 nvme0n1 00:28:12.410 06:14:28 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:12.410 06:14:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:12.410 06:14:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.410 06:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.410 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.410 06:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.669 06:14:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:12.669 06:14:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:12.669 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:12.927 06:14:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:12.927 06:14:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:12.927 06:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.927 06:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.927 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.186 06:14:28 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:13.186 06:14:28 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:13.186 06:14:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:13.186 06:14:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:13.186 06:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.186 06:14:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.186 06:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:13.444 06:14:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:13.444 06:14:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:13.444 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:13.703 06:14:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:13.703 06:14:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:13.703 06:14:29 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.961 06:14:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:13.961 06:14:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MswYW5xerH 00:28:13.961 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MswYW5xerH 00:28:13.961 06:14:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pirCG1EwYs 00:28:13.961 06:14:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pirCG1EwYs 00:28:14.220 06:14:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:14.220 06:14:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:14.787 nvme0n1 00:28:14.787 06:14:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:14.787 06:14:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:15.047 06:14:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:15.047 "subsystems": [ 00:28:15.047 { 00:28:15.047 "subsystem": "keyring", 00:28:15.047 "config": [ 00:28:15.047 { 00:28:15.047 "method": "keyring_file_add_key", 00:28:15.047 "params": { 00:28:15.047 "name": "key0", 00:28:15.047 "path": "/tmp/tmp.MswYW5xerH" 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "keyring_file_add_key", 00:28:15.047 "params": { 00:28:15.047 "name": "key1", 00:28:15.047 "path": "/tmp/tmp.pirCG1EwYs" 00:28:15.047 } 00:28:15.047 } 00:28:15.047 ] 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "subsystem": "iobuf", 00:28:15.047 "config": [ 00:28:15.047 { 00:28:15.047 "method": "iobuf_set_options", 00:28:15.047 "params": { 00:28:15.047 "small_pool_count": 8192, 00:28:15.047 "large_pool_count": 1024, 00:28:15.047 "small_bufsize": 8192, 00:28:15.047 "large_bufsize": 135168 00:28:15.047 } 00:28:15.047 } 00:28:15.047 ] 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "subsystem": "sock", 00:28:15.047 "config": [ 00:28:15.047 { 00:28:15.047 "method": "sock_set_default_impl", 00:28:15.047 "params": { 00:28:15.047 "impl_name": "uring" 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "sock_impl_set_options", 00:28:15.047 "params": { 00:28:15.047 "impl_name": "ssl", 00:28:15.047 "recv_buf_size": 4096, 00:28:15.047 "send_buf_size": 4096, 00:28:15.047 "enable_recv_pipe": true, 00:28:15.047 "enable_quickack": false, 00:28:15.047 "enable_placement_id": 0, 00:28:15.047 "enable_zerocopy_send_server": true, 00:28:15.047 "enable_zerocopy_send_client": false, 00:28:15.047 "zerocopy_threshold": 0, 00:28:15.047 "tls_version": 0, 00:28:15.047 "enable_ktls": false 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "sock_impl_set_options", 00:28:15.047 "params": { 00:28:15.047 "impl_name": "posix", 00:28:15.047 "recv_buf_size": 2097152, 00:28:15.047 "send_buf_size": 2097152, 00:28:15.047 "enable_recv_pipe": true, 00:28:15.047 "enable_quickack": false, 00:28:15.047 "enable_placement_id": 0, 00:28:15.047 
"enable_zerocopy_send_server": true, 00:28:15.047 "enable_zerocopy_send_client": false, 00:28:15.047 "zerocopy_threshold": 0, 00:28:15.047 "tls_version": 0, 00:28:15.047 "enable_ktls": false 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "sock_impl_set_options", 00:28:15.047 "params": { 00:28:15.047 "impl_name": "uring", 00:28:15.047 "recv_buf_size": 2097152, 00:28:15.047 "send_buf_size": 2097152, 00:28:15.047 "enable_recv_pipe": true, 00:28:15.047 "enable_quickack": false, 00:28:15.047 "enable_placement_id": 0, 00:28:15.047 "enable_zerocopy_send_server": false, 00:28:15.047 "enable_zerocopy_send_client": false, 00:28:15.047 "zerocopy_threshold": 0, 00:28:15.047 "tls_version": 0, 00:28:15.047 "enable_ktls": false 00:28:15.047 } 00:28:15.047 } 00:28:15.047 ] 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "subsystem": "vmd", 00:28:15.047 "config": [] 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "subsystem": "accel", 00:28:15.047 "config": [ 00:28:15.047 { 00:28:15.047 "method": "accel_set_options", 00:28:15.047 "params": { 00:28:15.047 "small_cache_size": 128, 00:28:15.047 "large_cache_size": 16, 00:28:15.047 "task_count": 2048, 00:28:15.047 "sequence_count": 2048, 00:28:15.047 "buf_count": 2048 00:28:15.047 } 00:28:15.047 } 00:28:15.047 ] 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "subsystem": "bdev", 00:28:15.047 "config": [ 00:28:15.047 { 00:28:15.047 "method": "bdev_set_options", 00:28:15.047 "params": { 00:28:15.047 "bdev_io_pool_size": 65535, 00:28:15.047 "bdev_io_cache_size": 256, 00:28:15.047 "bdev_auto_examine": true, 00:28:15.047 "iobuf_small_cache_size": 128, 00:28:15.047 "iobuf_large_cache_size": 16 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "bdev_raid_set_options", 00:28:15.047 "params": { 00:28:15.047 "process_window_size_kb": 1024 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "bdev_iscsi_set_options", 00:28:15.047 "params": { 00:28:15.047 "timeout_sec": 30 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": "bdev_nvme_set_options", 00:28:15.047 "params": { 00:28:15.047 "action_on_timeout": "none", 00:28:15.047 "timeout_us": 0, 00:28:15.047 "timeout_admin_us": 0, 00:28:15.047 "keep_alive_timeout_ms": 10000, 00:28:15.047 "arbitration_burst": 0, 00:28:15.047 "low_priority_weight": 0, 00:28:15.047 "medium_priority_weight": 0, 00:28:15.047 "high_priority_weight": 0, 00:28:15.047 "nvme_adminq_poll_period_us": 10000, 00:28:15.047 "nvme_ioq_poll_period_us": 0, 00:28:15.047 "io_queue_requests": 512, 00:28:15.047 "delay_cmd_submit": true, 00:28:15.047 "transport_retry_count": 4, 00:28:15.047 "bdev_retry_count": 3, 00:28:15.047 "transport_ack_timeout": 0, 00:28:15.047 "ctrlr_loss_timeout_sec": 0, 00:28:15.047 "reconnect_delay_sec": 0, 00:28:15.047 "fast_io_fail_timeout_sec": 0, 00:28:15.047 "disable_auto_failback": false, 00:28:15.047 "generate_uuids": false, 00:28:15.047 "transport_tos": 0, 00:28:15.047 "nvme_error_stat": false, 00:28:15.047 "rdma_srq_size": 0, 00:28:15.047 "io_path_stat": false, 00:28:15.047 "allow_accel_sequence": false, 00:28:15.047 "rdma_max_cq_size": 0, 00:28:15.047 "rdma_cm_event_timeout_ms": 0, 00:28:15.047 "dhchap_digests": [ 00:28:15.047 "sha256", 00:28:15.047 "sha384", 00:28:15.047 "sha512" 00:28:15.047 ], 00:28:15.047 "dhchap_dhgroups": [ 00:28:15.047 "null", 00:28:15.047 "ffdhe2048", 00:28:15.047 "ffdhe3072", 00:28:15.047 "ffdhe4096", 00:28:15.047 "ffdhe6144", 00:28:15.047 "ffdhe8192" 00:28:15.047 ] 00:28:15.047 } 00:28:15.047 }, 00:28:15.047 { 00:28:15.047 "method": 
"bdev_nvme_attach_controller", 00:28:15.047 "params": { 00:28:15.047 "name": "nvme0", 00:28:15.047 "trtype": "TCP", 00:28:15.047 "adrfam": "IPv4", 00:28:15.047 "traddr": "127.0.0.1", 00:28:15.047 "trsvcid": "4420", 00:28:15.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.047 "prchk_reftag": false, 00:28:15.047 "prchk_guard": false, 00:28:15.048 "ctrlr_loss_timeout_sec": 0, 00:28:15.048 "reconnect_delay_sec": 0, 00:28:15.048 "fast_io_fail_timeout_sec": 0, 00:28:15.048 "psk": "key0", 00:28:15.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.048 "hdgst": false, 00:28:15.048 "ddgst": false 00:28:15.048 } 00:28:15.048 }, 00:28:15.048 { 00:28:15.048 "method": "bdev_nvme_set_hotplug", 00:28:15.048 "params": { 00:28:15.048 "period_us": 100000, 00:28:15.048 "enable": false 00:28:15.048 } 00:28:15.048 }, 00:28:15.048 { 00:28:15.048 "method": "bdev_wait_for_examine" 00:28:15.048 } 00:28:15.048 ] 00:28:15.048 }, 00:28:15.048 { 00:28:15.048 "subsystem": "nbd", 00:28:15.048 "config": [] 00:28:15.048 } 00:28:15.048 ] 00:28:15.048 }' 00:28:15.048 06:14:30 keyring_file -- keyring/file.sh@114 -- # killprocess 91461 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91461 ']' 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91461 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91461 00:28:15.048 killing process with pid 91461 00:28:15.048 Received shutdown signal, test time was about 1.000000 seconds 00:28:15.048 00:28:15.048 Latency(us) 00:28:15.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.048 =================================================================================================================== 00:28:15.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91461' 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@967 -- # kill 91461 00:28:15.048 06:14:30 keyring_file -- common/autotest_common.sh@972 -- # wait 91461 00:28:15.983 06:14:31 keyring_file -- keyring/file.sh@117 -- # bperfpid=91716 00:28:15.983 06:14:31 keyring_file -- keyring/file.sh@119 -- # waitforlisten 91716 /var/tmp/bperf.sock 00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 91716 ']' 00:28:15.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.983 06:14:31 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:15.983 06:14:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:15.983 06:14:31 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:15.983 "subsystems": [ 00:28:15.983 { 00:28:15.983 "subsystem": "keyring", 00:28:15.984 "config": [ 00:28:15.984 { 00:28:15.984 "method": "keyring_file_add_key", 00:28:15.984 "params": { 00:28:15.984 "name": "key0", 00:28:15.984 "path": "/tmp/tmp.MswYW5xerH" 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "keyring_file_add_key", 00:28:15.984 "params": { 00:28:15.984 "name": "key1", 00:28:15.984 "path": "/tmp/tmp.pirCG1EwYs" 00:28:15.984 } 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "iobuf", 00:28:15.984 "config": [ 00:28:15.984 { 00:28:15.984 "method": "iobuf_set_options", 00:28:15.984 "params": { 00:28:15.984 "small_pool_count": 8192, 00:28:15.984 "large_pool_count": 1024, 00:28:15.984 "small_bufsize": 8192, 00:28:15.984 "large_bufsize": 135168 00:28:15.984 } 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "sock", 00:28:15.984 "config": [ 00:28:15.984 { 00:28:15.984 "method": "sock_set_default_impl", 00:28:15.984 "params": { 00:28:15.984 "impl_name": "uring" 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "sock_impl_set_options", 00:28:15.984 "params": { 00:28:15.984 "impl_name": "ssl", 00:28:15.984 "recv_buf_size": 4096, 00:28:15.984 "send_buf_size": 4096, 00:28:15.984 "enable_recv_pipe": true, 00:28:15.984 "enable_quickack": false, 00:28:15.984 "enable_placement_id": 0, 00:28:15.984 "enable_zerocopy_send_server": true, 00:28:15.984 "enable_zerocopy_send_client": false, 00:28:15.984 "zerocopy_threshold": 0, 00:28:15.984 "tls_version": 0, 00:28:15.984 "enable_ktls": false 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "sock_impl_set_options", 00:28:15.984 "params": { 00:28:15.984 "impl_name": "posix", 00:28:15.984 "recv_buf_size": 2097152, 00:28:15.984 "send_buf_size": 2097152, 00:28:15.984 "enable_recv_pipe": true, 00:28:15.984 "enable_quickack": false, 00:28:15.984 "enable_placement_id": 0, 00:28:15.984 "enable_zerocopy_send_server": true, 00:28:15.984 "enable_zerocopy_send_client": false, 00:28:15.984 "zerocopy_threshold": 0, 00:28:15.984 "tls_version": 0, 00:28:15.984 "enable_ktls": false 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "sock_impl_set_options", 00:28:15.984 "params": { 00:28:15.984 "impl_name": "uring", 00:28:15.984 "recv_buf_size": 2097152, 00:28:15.984 "send_buf_size": 2097152, 00:28:15.984 "enable_recv_pipe": true, 00:28:15.984 "enable_quickack": false, 00:28:15.984 "enable_placement_id": 0, 00:28:15.984 "enable_zerocopy_send_server": false, 00:28:15.984 "enable_zerocopy_send_client": false, 00:28:15.984 "zerocopy_threshold": 0, 00:28:15.984 "tls_version": 0, 00:28:15.984 "enable_ktls": false 00:28:15.984 } 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "vmd", 00:28:15.984 "config": [] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "accel", 00:28:15.984 "config": [ 00:28:15.984 { 00:28:15.984 "method": "accel_set_options", 00:28:15.984 "params": { 00:28:15.984 "small_cache_size": 128, 00:28:15.984 "large_cache_size": 16, 
00:28:15.984 "task_count": 2048, 00:28:15.984 "sequence_count": 2048, 00:28:15.984 "buf_count": 2048 00:28:15.984 } 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "bdev", 00:28:15.984 "config": [ 00:28:15.984 { 00:28:15.984 "method": "bdev_set_options", 00:28:15.984 "params": { 00:28:15.984 "bdev_io_pool_size": 65535, 00:28:15.984 "bdev_io_cache_size": 256, 00:28:15.984 "bdev_auto_examine": true, 00:28:15.984 "iobuf_small_cache_size": 128, 00:28:15.984 "iobuf_large_cache_size": 16 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_raid_set_options", 00:28:15.984 "params": { 00:28:15.984 "process_window_size_kb": 1024 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_iscsi_set_options", 00:28:15.984 "params": { 00:28:15.984 "timeout_sec": 30 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_nvme_set_options", 00:28:15.984 "params": { 00:28:15.984 "action_on_timeout": "none", 00:28:15.984 "timeout_us": 0, 00:28:15.984 "timeout_admin_us": 0, 00:28:15.984 "keep_alive_timeout_ms": 10000, 00:28:15.984 "arbitration_burst": 0, 00:28:15.984 "low_priority_weight": 0, 00:28:15.984 "medium_priority_weight": 0, 00:28:15.984 "high_priority_weight": 0, 00:28:15.984 "nvme_adminq_poll_period_us": 10000, 00:28:15.984 "nvme_ioq_poll_period_us": 0, 00:28:15.984 "io_queue_requests": 512, 00:28:15.984 "delay_cmd_submit": true, 00:28:15.984 "transport_retry_count": 4, 00:28:15.984 "bdev_retry_count": 3, 00:28:15.984 "transport_ack_timeout": 0, 00:28:15.984 "ctrlr_loss_timeout_sec": 0, 00:28:15.984 "reconnect_delay_sec": 0, 00:28:15.984 "fast_io_fail_timeout_sec": 0, 00:28:15.984 "disable_auto_failback": false, 00:28:15.984 "generate_uuids": false, 00:28:15.984 "transport_tos": 0, 00:28:15.984 "nvme_error_stat": false, 00:28:15.984 "rdma_srq_size": 0, 00:28:15.984 "io_path_stat": false, 00:28:15.984 "allow_accel_sequence": false, 00:28:15.984 "rdma_max_cq_size": 0, 00:28:15.984 "rdma_cm_event_timeout_ms": 0, 00:28:15.984 "dhchap_digests": [ 00:28:15.984 "sha256", 00:28:15.984 "sha384", 00:28:15.984 "sha512" 00:28:15.984 ], 00:28:15.984 "dhchap_dhgroups": [ 00:28:15.984 "null", 00:28:15.984 "ffdhe2048", 00:28:15.984 "ffdhe3072", 00:28:15.984 "ffdhe4096", 00:28:15.984 "ffdhe6144", 00:28:15.984 "ffdhe8192" 00:28:15.984 ] 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_nvme_attach_controller", 00:28:15.984 "params": { 00:28:15.984 "name": "nvme0", 00:28:15.984 "trtype": "TCP", 00:28:15.984 "adrfam": "IPv4", 00:28:15.984 "traddr": "127.0.0.1", 00:28:15.984 "trsvcid": "4420", 00:28:15.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.984 "prchk_reftag": false, 00:28:15.984 "prchk_guard": false, 00:28:15.984 "ctrlr_loss_timeout_sec": 0, 00:28:15.984 "reconnect_delay_sec": 0, 00:28:15.984 "fast_io_fail_timeout_sec": 0, 00:28:15.984 "psk": "key0", 00:28:15.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.984 "hdgst": false, 00:28:15.984 "ddgst": false 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_nvme_set_hotplug", 00:28:15.984 "params": { 00:28:15.984 "period_us": 100000, 00:28:15.984 "enable": false 00:28:15.984 } 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "method": "bdev_wait_for_examine" 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }, 00:28:15.984 { 00:28:15.984 "subsystem": "nbd", 00:28:15.984 "config": [] 00:28:15.984 } 00:28:15.984 ] 00:28:15.984 }' 00:28:16.243 [2024-07-11 06:14:31.992117] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 
24.03.0 initialization... 00:28:16.243 [2024-07-11 06:14:31.992299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91716 ] 00:28:16.243 [2024-07-11 06:14:32.163673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.502 [2024-07-11 06:14:32.354308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.762 [2024-07-11 06:14:32.620336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:17.031 [2024-07-11 06:14:32.736850] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:17.031 06:14:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.031 06:14:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:17.031 06:14:32 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:17.031 06:14:32 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:17.031 06:14:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.303 06:14:33 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:17.303 06:14:33 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:17.303 06:14:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:17.303 06:14:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:17.303 06:14:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.303 06:14:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.303 06:14:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:17.561 06:14:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:17.561 06:14:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:17.561 06:14:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:17.561 06:14:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:17.561 06:14:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:17.561 06:14:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.561 06:14:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.819 06:14:33 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:17.819 06:14:33 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:17.819 06:14:33 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:17.819 06:14:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:18.077 06:14:33 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:18.077 06:14:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:18.077 06:14:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.MswYW5xerH /tmp/tmp.pirCG1EwYs 00:28:18.077 06:14:33 keyring_file -- keyring/file.sh@20 -- # killprocess 91716 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91716 ']' 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91716 00:28:18.077 06:14:33 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91716 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91716' 00:28:18.077 killing process with pid 91716 00:28:18.077 Received shutdown signal, test time was about 1.000000 seconds 00:28:18.077 00:28:18.077 Latency(us) 00:28:18.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.077 =================================================================================================================== 00:28:18.077 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@967 -- # kill 91716 00:28:18.077 06:14:33 keyring_file -- common/autotest_common.sh@972 -- # wait 91716 00:28:19.453 06:14:34 keyring_file -- keyring/file.sh@21 -- # killprocess 91444 00:28:19.453 06:14:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 91444 ']' 00:28:19.453 06:14:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 91444 00:28:19.453 06:14:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91444 00:28:19.453 killing process with pid 91444 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91444' 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@967 -- # kill 91444 00:28:19.453 [2024-07-11 06:14:35.026006] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:19.453 06:14:35 keyring_file -- common/autotest_common.sh@972 -- # wait 91444 00:28:21.357 00:28:21.357 real 0m19.181s 00:28:21.357 user 0m44.238s 00:28:21.357 sys 0m2.971s 00:28:21.357 06:14:37 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:21.357 ************************************ 00:28:21.357 END TEST keyring_file 00:28:21.357 ************************************ 00:28:21.357 06:14:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:21.357 06:14:37 -- common/autotest_common.sh@1142 -- # return 0 00:28:21.357 06:14:37 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:21.357 06:14:37 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:21.357 06:14:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:21.357 06:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.357 06:14:37 -- common/autotest_common.sh@10 -- # set +x 00:28:21.357 ************************************ 00:28:21.357 START TEST keyring_linux 00:28:21.357 ************************************ 00:28:21.357 06:14:37 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:21.617 * 
Looking for test storage... 00:28:21.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8738190a-dd44-4449-9019-403e2a10a368 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8738190a-dd44-4449-9019-403e2a10a368 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:21.617 06:14:37 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.617 06:14:37 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.617 06:14:37 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.617 06:14:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.617 06:14:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.617 06:14:37 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.617 06:14:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:21.617 06:14:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:21.617 06:14:37 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:21.617 /tmp/:spdk-test:key0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:21.617 06:14:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:21.617 /tmp/:spdk-test:key1 00:28:21.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.617 06:14:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=91854 00:28:21.617 06:14:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 91854 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91854 ']' 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.617 06:14:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:21.877 [2024-07-11 06:14:37.548565] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
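The format_interchange_psk / python - step above is what turns the raw hex string 00112233445566778899aabbccddeeff into the NVMeTLSkey-1:00:...: payload written to /tmp/:spdk-test:key0 (and likewise for key1). A hedged reconstruction, judging only from the output visible here: the helper appears to base64-encode the configured key bytes followed by a 4-byte CRC-32 and wrap the result in the prefix and digest field ("00" meaning no hash). The CRC byte order below is an assumption recalled from the helper, not confirmed by this excerpt; check test/nvmf/common.sh before relying on it.

key=00112233445566778899aabbccddeeff
# Sketch: base64(key bytes + CRC-32 of the key), prefixed with NVMeTLSkey-1 and
# the digest id. The "little" byte order for the CRC is an assumption.
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PYEOF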
00:28:21.877 [2024-07-11 06:14:37.548757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91854 ] 00:28:21.877 [2024-07-11 06:14:37.725789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.137 [2024-07-11 06:14:37.989679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.396 [2024-07-11 06:14:38.150521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:22.964 06:14:38 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.964 06:14:38 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:22.964 06:14:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:22.964 06:14:38 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.964 06:14:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:22.965 [2024-07-11 06:14:38.625036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.965 null0 00:28:22.965 [2024-07-11 06:14:38.657005] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:22.965 [2024-07-11 06:14:38.657292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.965 06:14:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:22.965 847829672 00:28:22.965 06:14:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:22.965 79553018 00:28:22.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.965 06:14:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=91873 00:28:22.965 06:14:38 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:22.965 06:14:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 91873 /var/tmp/bperf.sock 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 91873 ']' 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.965 06:14:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:22.965 [2024-07-11 06:14:38.794988] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 24.03.0 initialization... 
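The two keyctl add user ... @s calls above are the whole hand-off to the kernel: the interchange-format PSKs become user keys named :spdk-test:key0 and :spdk-test:key1 in the session keyring, and the serial numbers printed back (847829672 and 79553018) are what the test later compares against SPDK's view of the keys. Condensed from the commands visible in this log (keyctl is the keyutils front end), the round trip for one key is:

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
sn=$(keyctl search @s user :spdk-test:key0)   # resolve the serial, e.g. 847829672
keyctl print "$sn"                            # dump the payload for comparison
keyctl unlink "$sn"                           # cleanup; the log reports "1 links removed"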
00:28:22.965 [2024-07-11 06:14:38.795423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91873 ] 00:28:23.227 [2024-07-11 06:14:38.967832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.485 [2024-07-11 06:14:39.164894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.053 06:14:39 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.053 06:14:39 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:24.053 06:14:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:24.053 06:14:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:24.312 06:14:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:24.312 06:14:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:24.571 [2024-07-11 06:14:40.418450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:24.829 06:14:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:24.829 06:14:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:24.830 [2024-07-11 06:14:40.718319] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:25.088 nvme0n1 00:28:25.088 06:14:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:25.088 06:14:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:25.088 06:14:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:25.088 06:14:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:25.088 06:14:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.088 06:14:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:25.346 06:14:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:25.346 06:14:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:25.346 06:14:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:25.346 06:14:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:25.346 06:14:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:25.347 06:14:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.347 06:14:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@25 -- # sn=847829672 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:25.605 
06:14:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 847829672 == \8\4\7\8\2\9\6\7\2 ]] 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 847829672 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:25.605 06:14:41 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.605 Running I/O for 1 seconds... 00:28:26.982 00:28:26.982 Latency(us) 00:28:26.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.982 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:26.982 nvme0n1 : 1.02 6886.40 26.90 0.00 0.00 18403.39 8519.68 23116.33 00:28:26.982 =================================================================================================================== 00:28:26.982 Total : 6886.40 26.90 0.00 0.00 18403.39 8519.68 23116.33 00:28:26.982 0 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:26.982 06:14:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:26.982 06:14:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.982 06:14:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:27.241 06:14:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:27.241 06:14:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:27.241 06:14:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:27.241 06:14:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:27.241 06:14:43 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:27.241 06:14:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:27.501 [2024-07-11 06:14:43.316946] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:27.501 [2024-07-11 06:14:43.317589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:28:27.501 [2024-07-11 06:14:43.318570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:28:27.501 [2024-07-11 06:14:43.319549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:27.501 [2024-07-11 06:14:43.319586] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:27.501 [2024-07-11 06:14:43.319609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:27.501 request: 00:28:27.501 { 00:28:27.501 "name": "nvme0", 00:28:27.501 "trtype": "tcp", 00:28:27.501 "traddr": "127.0.0.1", 00:28:27.501 "adrfam": "ipv4", 00:28:27.501 "trsvcid": "4420", 00:28:27.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:27.501 "prchk_reftag": false, 00:28:27.501 "prchk_guard": false, 00:28:27.501 "hdgst": false, 00:28:27.501 "ddgst": false, 00:28:27.501 "psk": ":spdk-test:key1", 00:28:27.501 "method": "bdev_nvme_attach_controller", 00:28:27.501 "req_id": 1 00:28:27.501 } 00:28:27.501 Got JSON-RPC error response 00:28:27.501 response: 00:28:27.501 { 00:28:27.501 "code": -5, 00:28:27.501 "message": "Input/output error" 00:28:27.501 } 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@33 -- # sn=847829672 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 847829672 00:28:27.501 1 links removed 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@33 -- # sn=79553018 00:28:27.501 06:14:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 79553018 00:28:27.501 1 links removed 00:28:27.501 06:14:43 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 91873 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91873 ']' 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91873 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91873 00:28:27.501 killing process with pid 91873 00:28:27.501 Received shutdown signal, test time was about 1.000000 seconds 00:28:27.501 00:28:27.501 Latency(us) 00:28:27.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.501 =================================================================================================================== 00:28:27.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91873' 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@967 -- # kill 91873 00:28:27.501 06:14:43 keyring_linux -- common/autotest_common.sh@972 -- # wait 91873 00:28:28.878 06:14:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 91854 00:28:28.878 06:14:44 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 91854 ']' 00:28:28.878 06:14:44 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 91854 00:28:28.878 06:14:44 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:28.878 06:14:44 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:28.878 06:14:44 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91854 00:28:28.878 killing process with pid 91854 00:28:28.879 06:14:44 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:28.879 06:14:44 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:28.879 06:14:44 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91854' 00:28:28.879 06:14:44 keyring_linux -- common/autotest_common.sh@967 -- # kill 91854 00:28:28.879 06:14:44 keyring_linux -- common/autotest_common.sh@972 -- # wait 91854 00:28:30.784 00:28:30.784 real 0m9.402s 00:28:30.784 user 0m16.624s 00:28:30.784 sys 0m1.595s 00:28:30.784 06:14:46 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.784 ************************************ 00:28:30.784 END TEST keyring_linux 00:28:30.784 ************************************ 00:28:30.784 06:14:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:30.784 06:14:46 -- common/autotest_common.sh@1142 -- # return 0 00:28:30.784 06:14:46 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:30.784 06:14:46 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:30.784 06:14:46 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:30.784 06:14:46 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:30.784 06:14:46 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:30.784 06:14:46 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:30.784 06:14:46 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:30.784 06:14:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:30.784 06:14:46 -- common/autotest_common.sh@10 -- # set +x 00:28:30.784 06:14:46 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:30.784 06:14:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:30.784 06:14:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:30.784 06:14:46 -- common/autotest_common.sh@10 -- # set +x 00:28:32.687 INFO: APP EXITING 00:28:32.687 INFO: killing all VMs 00:28:32.687 INFO: killing vhost app 00:28:32.687 INFO: EXIT DONE 00:28:33.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:33.254 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:33.254 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:33.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:33.822 Cleaning 00:28:33.822 Removing: /var/run/dpdk/spdk0/config 00:28:34.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:34.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:34.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:34.080 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:34.080 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:34.080 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:34.080 Removing: /var/run/dpdk/spdk1/config 00:28:34.080 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:34.080 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:34.080 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:34.080 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:34.080 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:34.080 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:34.080 Removing: /var/run/dpdk/spdk2/config 00:28:34.080 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:34.080 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:34.080 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:34.080 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:34.080 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:34.080 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:34.080 Removing: /var/run/dpdk/spdk3/config 00:28:34.080 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:34.080 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:34.080 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:34.080 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:34.080 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:34.080 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:34.080 Removing: /var/run/dpdk/spdk4/config 00:28:34.080 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:34.080 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:34.080 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:34.080 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:34.080 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:34.080 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:34.080 Removing: /dev/shm/nvmf_trace.0 00:28:34.080 Removing: /dev/shm/spdk_tgt_trace.pid59492 00:28:34.080 Removing: /var/run/dpdk/spdk0 00:28:34.080 Removing: /var/run/dpdk/spdk1 00:28:34.080 Removing: /var/run/dpdk/spdk2 00:28:34.080 Removing: /var/run/dpdk/spdk3 00:28:34.080 Removing: /var/run/dpdk/spdk4 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59292 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59492 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59712 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59806 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59851 00:28:34.080 Removing: /var/run/dpdk/spdk_pid59985 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60003 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60151 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60349 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60502 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60587 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60680 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60789 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60878 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60917 00:28:34.080 Removing: /var/run/dpdk/spdk_pid60954 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61022 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61128 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61578 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61642 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61705 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61721 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61842 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61858 00:28:34.080 Removing: /var/run/dpdk/spdk_pid61984 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62001 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62066 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62084 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62148 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62166 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62336 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62372 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62448 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62522 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62560 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62627 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62679 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62720 00:28:34.080 Removing: /var/run/dpdk/spdk_pid62761 00:28:34.081 Removing: /var/run/dpdk/spdk_pid62808 00:28:34.081 Removing: /var/run/dpdk/spdk_pid62849 00:28:34.339 Removing: /var/run/dpdk/spdk_pid62895 00:28:34.339 Removing: /var/run/dpdk/spdk_pid62936 00:28:34.339 Removing: /var/run/dpdk/spdk_pid62983 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63024 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63065 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63111 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63158 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63199 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63240 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63281 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63328 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63377 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63427 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63468 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63515 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63592 00:28:34.339 Removing: /var/run/dpdk/spdk_pid63708 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64028 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64041 
00:28:34.339 Removing: /var/run/dpdk/spdk_pid64084 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64115 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64137 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64174 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64199 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64227 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64258 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64283 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64311 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64342 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64373 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64395 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64426 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64457 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64479 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64516 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64541 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64569 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64611 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64637 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64678 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64749 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64795 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64816 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64857 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64884 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64898 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64958 00:28:34.339 Removing: /var/run/dpdk/spdk_pid64978 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65024 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65046 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65067 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65089 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65116 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65132 00:28:34.339 Removing: /var/run/dpdk/spdk_pid65159 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65179 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65221 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65265 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65281 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65327 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65354 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65374 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65426 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65451 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65495 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65520 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65544 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65559 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65584 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65604 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65629 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65643 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65729 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65816 00:28:34.340 Removing: /var/run/dpdk/spdk_pid65966 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66011 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66073 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66095 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66129 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66156 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66205 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66232 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66314 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66361 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66444 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66548 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66633 00:28:34.340 Removing: /var/run/dpdk/spdk_pid66685 00:28:34.608 Removing: 
/var/run/dpdk/spdk_pid66794 00:28:34.608 Removing: /var/run/dpdk/spdk_pid66854 00:28:34.608 Removing: /var/run/dpdk/spdk_pid66893 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67141 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67253 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67298 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67619 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67658 00:28:34.608 Removing: /var/run/dpdk/spdk_pid67982 00:28:34.608 Removing: /var/run/dpdk/spdk_pid68394 00:28:34.608 Removing: /var/run/dpdk/spdk_pid68675 00:28:34.608 Removing: /var/run/dpdk/spdk_pid69508 00:28:34.608 Removing: /var/run/dpdk/spdk_pid70362 00:28:34.608 Removing: /var/run/dpdk/spdk_pid70491 00:28:34.608 Removing: /var/run/dpdk/spdk_pid70565 00:28:34.608 Removing: /var/run/dpdk/spdk_pid71858 00:28:34.608 Removing: /var/run/dpdk/spdk_pid72119 00:28:34.608 Removing: /var/run/dpdk/spdk_pid75399 00:28:34.608 Removing: /var/run/dpdk/spdk_pid75747 00:28:34.608 Removing: /var/run/dpdk/spdk_pid75857 00:28:34.608 Removing: /var/run/dpdk/spdk_pid75992 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76026 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76066 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76100 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76213 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76357 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76533 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76627 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76838 00:28:34.608 Removing: /var/run/dpdk/spdk_pid76940 00:28:34.608 Removing: /var/run/dpdk/spdk_pid77047 00:28:34.608 Removing: /var/run/dpdk/spdk_pid77379 00:28:34.608 Removing: /var/run/dpdk/spdk_pid77752 00:28:34.608 Removing: /var/run/dpdk/spdk_pid77761 00:28:34.608 Removing: /var/run/dpdk/spdk_pid79974 00:28:34.608 Removing: /var/run/dpdk/spdk_pid79984 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80276 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80291 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80312 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80349 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80356 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80446 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80450 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80554 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80568 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80672 00:28:34.608 Removing: /var/run/dpdk/spdk_pid80675 00:28:34.608 Removing: /var/run/dpdk/spdk_pid81078 00:28:34.608 Removing: /var/run/dpdk/spdk_pid81114 00:28:34.608 Removing: /var/run/dpdk/spdk_pid81219 00:28:34.608 Removing: /var/run/dpdk/spdk_pid81297 00:28:34.608 Removing: /var/run/dpdk/spdk_pid81612 00:28:34.609 Removing: /var/run/dpdk/spdk_pid81819 00:28:34.609 Removing: /var/run/dpdk/spdk_pid82216 00:28:34.609 Removing: /var/run/dpdk/spdk_pid82728 00:28:34.609 Removing: /var/run/dpdk/spdk_pid83561 00:28:34.609 Removing: /var/run/dpdk/spdk_pid84176 00:28:34.609 Removing: /var/run/dpdk/spdk_pid84179 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86123 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86201 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86275 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86342 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86482 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86549 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86616 00:28:34.609 Removing: /var/run/dpdk/spdk_pid86677 00:28:34.609 Removing: /var/run/dpdk/spdk_pid87019 00:28:34.609 Removing: /var/run/dpdk/spdk_pid88187 00:28:34.609 Removing: /var/run/dpdk/spdk_pid88336 00:28:34.609 Removing: /var/run/dpdk/spdk_pid88585 
00:28:34.609 Removing: /var/run/dpdk/spdk_pid89149 00:28:34.609 Removing: /var/run/dpdk/spdk_pid89308 00:28:34.609 Removing: /var/run/dpdk/spdk_pid89464 00:28:34.609 Removing: /var/run/dpdk/spdk_pid89565 00:28:34.609 Removing: /var/run/dpdk/spdk_pid89728 00:28:34.609 Removing: /var/run/dpdk/spdk_pid89841 00:28:34.609 Removing: /var/run/dpdk/spdk_pid90520 00:28:34.609 Removing: /var/run/dpdk/spdk_pid90552 00:28:34.609 Removing: /var/run/dpdk/spdk_pid90594 00:28:34.609 Removing: /var/run/dpdk/spdk_pid90952 00:28:34.609 Removing: /var/run/dpdk/spdk_pid90987 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91019 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91444 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91461 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91716 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91854 00:28:34.609 Removing: /var/run/dpdk/spdk_pid91873 00:28:34.881 Clean 00:28:34.881 06:14:50 -- common/autotest_common.sh@1451 -- # return 0 00:28:34.881 06:14:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:34.881 06:14:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.881 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:28:34.881 06:14:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:34.881 06:14:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.881 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:28:34.881 06:14:50 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:34.881 06:14:50 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:34.881 06:14:50 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:34.881 06:14:50 -- spdk/autotest.sh@391 -- # hash lcov 00:28:34.881 06:14:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:34.881 06:14:50 -- spdk/autotest.sh@393 -- # hostname 00:28:34.881 06:14:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:35.140 geninfo: WARNING: invalid characters removed from testname! 
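The lcov invocations that follow merge the baseline and per-test captures into cov_total.info and then strip DPDK, system, and example paths out of it. The excerpt ends before any report is rendered; the customary last step, shown here only as a sketch (the exact flags autotest uses are not visible in this log), would be to feed the filtered tracefile to genhtml:

# Hypothetical final step, not shown in this log: render the filtered tracefile
# as a browsable HTML report.
genhtml /home/vagrant/spdk_repo/spdk/../output/cov_total.info \
    --branch-coverage --legend \
    -o /home/vagrant/spdk_repo/spdk/../output/coverage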
00:29:01.682 06:15:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:05.866 06:15:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:08.404 06:15:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:10.937 06:15:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:14.223 06:15:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:17.509 06:15:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:20.043 06:15:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:20.043 06:15:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:20.043 06:15:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:20.043 06:15:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.043 06:15:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.043 06:15:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.043 06:15:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.043 06:15:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.043 06:15:35 -- paths/export.sh@5 -- $ export PATH 00:29:20.043 06:15:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.043 06:15:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:20.043 06:15:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:20.043 06:15:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720678535.XXXXXX 00:29:20.043 06:15:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720678535.3GOIgZ 00:29:20.043 06:15:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:20.043 06:15:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:29:20.043 06:15:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:20.043 06:15:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:20.043 06:15:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:20.043 06:15:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:29:20.043 06:15:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:20.043 06:15:35 -- common/autotest_common.sh@10 -- $ set +x 00:29:20.043 06:15:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:29:20.043 06:15:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:20.043 06:15:35 -- pm/common@17 -- $ local monitor 00:29:20.043 06:15:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:20.043 06:15:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:20.043 06:15:35 -- pm/common@25 -- $ sleep 1 00:29:20.043 06:15:35 -- pm/common@21 -- $ date +%s 00:29:20.043 06:15:35 -- pm/common@21 -- $ date +%s 00:29:20.043 06:15:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720678535 00:29:20.043 06:15:35 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720678535 00:29:20.043 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720678535_collect-vmstat.pm.log 00:29:20.043 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720678535_collect-cpu-load.pm.log 00:29:20.981 06:15:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:20.981 06:15:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:20.981 06:15:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:20.981 06:15:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:20.981 06:15:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:20.981 06:15:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:20.981 06:15:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:20.981 06:15:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:20.981 06:15:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:20.981 06:15:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:21.239 06:15:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:21.239 06:15:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:21.239 06:15:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:21.239 06:15:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:21.239 06:15:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:21.239 06:15:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:21.239 06:15:36 -- pm/common@44 -- $ pid=93645 00:29:21.239 06:15:36 -- pm/common@50 -- $ kill -TERM 93645 00:29:21.239 06:15:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:21.239 06:15:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:21.239 06:15:36 -- pm/common@44 -- $ pid=93647 00:29:21.239 06:15:36 -- pm/common@50 -- $ kill -TERM 93647 00:29:21.239 + [[ -n 5162 ]] 00:29:21.239 + sudo kill 5162 00:29:21.249 [Pipeline] } 00:29:21.269 [Pipeline] // timeout 00:29:21.274 [Pipeline] } 00:29:21.291 [Pipeline] // stage 00:29:21.297 [Pipeline] } 00:29:21.315 [Pipeline] // catchError 00:29:21.325 [Pipeline] stage 00:29:21.327 [Pipeline] { (Stop VM) 00:29:21.342 [Pipeline] sh 00:29:21.624 + vagrant halt 00:29:25.810 ==> default: Halting domain... 00:29:31.160 [Pipeline] sh 00:29:31.440 + vagrant destroy -f 00:29:34.727 ==> default: Removing domain... 
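Teardown in this run is two-part: the collect-cpu-load and collect-vmstat monitors started earlier are stopped via the PID files they left under the power/ output directory, and the test VM is then halted and destroyed. A rough equivalent of those two steps, with the pidfile names and output path taken from the log; the loop is an illustrative simplification, not the pm/common implementation:
  # stop the resource monitors recorded by pidfile
  for pidfile in /home/vagrant/spdk_repo/spdk/../output/power/collect-*.pid; do
      [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")"  # TERM lets the collector flush its .pm.log
  done
  # shut down and delete the Vagrant test VM from the Jenkins workspace
  vagrant halt
  vagrant destroy -f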
00:29:34.739 [Pipeline] sh 00:29:35.017 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:29:35.026 [Pipeline] } 00:29:35.049 [Pipeline] // stage 00:29:35.056 [Pipeline] } 00:29:35.077 [Pipeline] // dir 00:29:35.084 [Pipeline] } 00:29:35.105 [Pipeline] // wrap 00:29:35.111 [Pipeline] } 00:29:35.129 [Pipeline] // catchError 00:29:35.138 [Pipeline] stage 00:29:35.140 [Pipeline] { (Epilogue) 00:29:35.151 [Pipeline] sh 00:29:35.427 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:41.999 [Pipeline] catchError 00:29:42.001 [Pipeline] { 00:29:42.015 [Pipeline] sh 00:29:42.295 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:44.828 Artifacts sizes are good 00:29:44.836 [Pipeline] } 00:29:44.853 [Pipeline] // catchError 00:29:44.863 [Pipeline] archiveArtifacts 00:29:44.870 Archiving artifacts 00:29:45.045 [Pipeline] cleanWs 00:29:45.058 [WS-CLEANUP] Deleting project workspace... 00:29:45.058 [WS-CLEANUP] Deferred wipeout is used... 00:29:45.086 [WS-CLEANUP] done 00:29:45.088 [Pipeline] } 00:29:45.105 [Pipeline] // stage 00:29:45.110 [Pipeline] } 00:29:45.126 [Pipeline] // node 00:29:45.131 [Pipeline] End of Pipeline 00:29:45.164 Finished: SUCCESS
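The epilogue above compresses the collected output, verifies the artifact size, archives it, and wipes the workspace. The size check itself is not reproduced in this log; a hypothetical stand-in that would emit the same "Artifacts sizes are good" message might look like the following, where the 2048 MB limit and the output directory are invented for illustration only:
  # hypothetical size gate; the real check_artifacts_size.sh is not shown in this log
  limit_mb=2048                       # assumed threshold, not from the log
  size_mb=$(du -sm output | cut -f1)  # total size of the collected artifacts in MB
  if [ "$size_mb" -le "$limit_mb" ]; then
      echo "Artifacts sizes are good"
  else
      echo "Artifacts too large: ${size_mb}MB (limit ${limit_mb}MB)" >&2
      exit 1
  fi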